WorldWideScience

Sample records for underlying model assumptions

  1. The stable model semantics under the any-world assumption

    OpenAIRE

    Straccia, Umberto; Loyer, Yann

    2004-01-01

    The stable model semantics has become a dominating approach to complete the knowledge provided by a logic program by means of the Closed World Assumption (CWA). The CWA asserts that any atom whose truth-value cannot be inferred from the facts and rules is supposed to be false. This assumption is orthogonal to the so-called Open World Assumption (OWA), which asserts that every such atom's truth is supposed to be unknown. The aim of this paper is to be more fine-grained. Indeed, the objec...
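
    To make the CWA/OWA contrast concrete, here is a small illustrative sketch (not the authors' formalism; the program, atoms and names are invented): it computes the least model of a definite logic program by forward chaining, then reads off the status of each atom under the two assumptions.

    ```python
    # Toy illustration of CWA vs. OWA for a definite logic program.
    # Rules are (head, [body atoms]); facts are rules with empty bodies.
    # The example program and atom names are hypothetical.

    def least_model(rules):
        """Forward-chain until no new atoms can be derived."""
        derived = set()
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return derived

    rules = [
        ("bird(tweety)", []),                      # fact
        ("flies(tweety)", ["bird(tweety)"]),       # rule: birds fly
    ]
    atoms = {"bird(tweety)", "flies(tweety)", "penguin(tweety)"}

    model = least_model(rules)
    for atom in sorted(atoms):
        cwa = "true" if atom in model else "false"     # Closed World Assumption
        owa = "true" if atom in model else "unknown"   # Open World Assumption
        print(f"{atom:20s}  CWA: {cwa:7s}  OWA: {owa}")
    ```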

  2. Contextuality under weak assumptions

    International Nuclear Information System (INIS)

    Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D

    2017-01-01

    The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove

  3. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take the environment and sensor noises into consideration, the identification problem is formulated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. In order to make the algorithm recursive, a propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
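
    As a rough illustration of the model class mentioned above (not the paper's AUV dynamics nor the PM-EIV algorithm itself), the sketch below simulates a generic Hammerstein system: a static polynomial nonlinearity feeding a linear state-space block, with noise added to both the input and output measurements to mimic the errors-in-variables setting. All coefficients are made up.

    ```python
    import numpy as np

    # Hypothetical Hammerstein system: static nonlinearity f(u) followed by
    # a linear state-space block, with noisy input/output measurements (EIV-like).
    rng = np.random.default_rng(0)

    A = np.array([[0.8, 0.1], [0.0, 0.9]])   # linear dynamics (assumed stable)
    B = np.array([[1.0], [0.5]])
    C = np.array([[1.0, 0.0]])

    def f(u):                                 # static input nonlinearity
        return u + 0.3 * u**2

    T = 200
    u = rng.uniform(-1.0, 1.0, T)             # true input
    x = np.zeros((2, 1))
    y = np.zeros(T)
    for t in range(T):
        y[t] = float(C @ x)
        x = A @ x + B * f(u[t])

    u_meas = u + 0.05 * rng.standard_normal(T)   # input measured with noise
    y_meas = y + 0.05 * rng.standard_normal(T)   # output measured with noise
    print(u_meas[:3], y_meas[:3])
    ```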

  4. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    Science.gov (United States)

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory, that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA) to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
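
    For readers unfamiliar with PBR, the sketch below computes it in its usual form (one half of the maximum growth rate, times a recovery factor, times a minimum population estimate) and applies that level of extra mortality in a toy Leslie matrix projection. The demographic rates are invented and are not the seabird values used in the study.

    ```python
    import numpy as np

    # Potential Biological Removal (Wade-style formula); inputs are illustrative.
    N_min, R_max, F_r = 10_000, 0.10, 0.5
    PBR = 0.5 * R_max * F_r * N_min
    print(f"PBR = {PBR:.0f} additional mortalities per year")

    # Toy Leslie matrix for a long-lived seabird-like population
    # (3 age classes; fecundity and survival values are made up).
    L = np.array([
        [0.0,  0.0,  0.3],   # fecundity of the adult class
        [0.85, 0.0,  0.0],   # juvenile survival
        [0.0,  0.9,  0.92],  # immature -> adult transition, adult survival
    ])
    n = np.array([4000.0, 3000.0, 10_000.0])

    for year in range(20):
        n = L @ n
        # remove PBR mortalities, spread in proportion to the current age structure
        n = np.maximum(n - PBR * n / n.sum(), 0.0)

    print(f"population after 20 years: {n.sum():.0f}")
    ```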

  5. Testing the simplex assumption underlying the Sport Motivation Scale: a structural equation modeling analysis.

    Science.gov (United States)

    Li, F; Harmer, P

    1996-12-01

    Self-determination theory (Deci & Ryan, 1985) suggests that motivational orientation or regulatory styles with respect to various behaviors can be conceptualized along a continuum ranging from low (amotivation) to high (intrinsic motivation) levels of self-determination. This pattern is manifested in the rank order of correlations among these regulatory styles (i.e., adjacent correlations are expected to be higher than those more distant) and is known as a simplex structure. Using responses from the Sport Motivation Scale (Pelletier et al., 1995) obtained from a sample of 857 college students (442 men, 415 women), the present study tested the simplex structure underlying SMS subscales via structural equation modeling. Results confirmed the simplex model structure, indicating that the various motivational constructs are empirically organized from low to high self-determination. The simplex pattern was further found to be invariant across gender. Findings from this study support the construct validity of the SMS and have important implications for studies focusing on the influence of motivational orientation in sport.
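
    A simplex structure simply means that correlations shrink as subscales get further apart on the motivation continuum. The toy check below uses a hypothetical correlation matrix (not the SMS data) to verify that adjacent correlations exceed the more distant ones.

    ```python
    import numpy as np

    # Hypothetical correlation matrix for subscales ordered from amotivation
    # to intrinsic motivation (values invented for illustration).
    R = np.array([
        [1.00, 0.55, 0.35, 0.15],
        [0.55, 1.00, 0.50, 0.30],
        [0.35, 0.50, 1.00, 0.45],
        [0.15, 0.30, 0.45, 1.00],
    ])

    # Simplex pattern: within each row, correlations should decrease as the
    # distance between subscales increases.
    k = R.shape[0]
    holds = all(
        R[i, j] >= R[i, j + 1]
        for i in range(k)
        for j in range(i + 1, k - 1)
    )
    print("simplex pattern holds:", holds)
    ```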

  6. A Memory-Based Model of Posttraumatic Stress Disorder: Evaluating Basic Assumptions Underlying the PTSD Diagnosis

    Science.gov (United States)

    Rubin, David C.; Berntsen, Dorthe; Bohni, Malene Klindt

    2008-01-01

    In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., text rev.; American Psychiatric Association,…

  7. Discrete-State and Continuous Models of Recognition Memory: Testing Core Properties under Minimal Assumptions

    Science.gov (United States)

    Kellen, David; Klauer, Karl Christoph

    2014-01-01

    A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on…

  8. Effect of grid resolution and subgrid assumptions on the model prediction of a reactive buoyant plume under convective conditions

    International Nuclear Information System (INIS)

    Chock, D.P.; Winkler, S.L.; Pu Sun

    2002-01-01

    We have introduced a new and elaborate approach to understand the impact of grid resolution and subgrid chemistry assumption on the grid-model prediction of species concentrations for a system with highly non-homogeneous chemistry - a reactive buoyant plume immediately downwind of the stack in a convective boundary layer. The Parcel-Grid plume approach was used to describe both the air parcel turbulent transport and the chemistry. This approach allows an identical transport process for all simulations. It also allows a description of subgrid chemistry. The ambient and plume parcel transport follows the description of Luhar and Britter (Atmos. Environ. 23 (1989) 1911; 26A (1992) 1283). The chemistry follows that of the Carbon-Bond mechanism. Three different grid sizes were considered: fine, medium and coarse, together with three different subgrid chemistry assumptions: micro-scale or individual parcel, tagged-parcel (plume and ambient parcels treated separately), and untagged-parcel (plume and ambient parcels treated indiscriminately). Reducing the subgrid information is not necessarily similar to increasing the model grid size. In our example, increasing the grid size leads to a reduction in the suppression of ozone in the presence of a high-NOx stack plume, and a reduction in the effectiveness of the NOx-inhibition effect. On the other hand, reducing the subgrid information (by using the untagged-parcel assumption) leads to an increase in ozone reduction and an enhancement of the NOx-inhibition effect insofar as the ozone extremum is concerned. (author)

  9. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    International Nuclear Information System (INIS)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-01-01

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model
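
    For background, the sketch below shows plain two-party Diffie-Hellman over a multiplicative group modulo a prime; the protocol studied in the paper generalises this to a dynamic group of principals and adds authentication, which is not reproduced here. The toy prime offers no real security.

    ```python
    import secrets

    # Textbook two-party Diffie-Hellman over Z_p* (toy parameters; no
    # authentication, so this only illustrates the key-exchange core).
    p, g = 23, 5                      # classic toy prime and generator

    a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

    A = pow(g, a, p)                  # Alice -> Bob
    B = pow(g, b, p)                  # Bob -> Alice

    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob
    print("shared secret:", shared_alice)
    ```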

  10. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom failed to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.
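
    The monopole picture criticised here corresponds to the simple Coulomb sum sketched below: each atom carries a single point charge and the potential is a sum of q/r terms, hence spherically symmetric around each atom. The charges and coordinates are arbitrary illustration values.

    ```python
    import numpy as np

    # Monopole (point-charge) electrostatic potential: V(r) = k * sum_i q_i / |r - r_i|.
    # Charges and positions are illustrative (atomic units, k = 1).
    charges = np.array([0.4, -0.4])                 # a crude bond dipole
    positions = np.array([[0.0, 0.0, 0.0],
                          [1.2, 0.0, 0.0]])

    def potential(point):
        d = np.linalg.norm(positions - point, axis=1)
        return float(np.sum(charges / d))

    print(potential(np.array([0.6, 1.0, 0.0])))     # potential at a probe point
    ```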

  11. Bank stress testing under different balance sheet assumptions

    OpenAIRE

    Busch, Ramona; Drescher, Christian; Memmel, Christoph

    2017-01-01

    Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as the one of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...

  12. Forecasting Value-at-Risk under Different Distributional Assumptions

    Directory of Open Access Journals (Sweden)

    Manuela Braione

    2016-01-01

    Full Text Available Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student, Multivariate Exponential Power and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy-tails and skewness in the distributional assumption, with the skew-Student outperforming the others across all tests and confidence levels.
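
    To illustrate how the distributional assumption enters a VaR forecast, the sketch below filters a return series through a GARCH(1,1) recursion with fixed, assumed (not estimated) parameters and compares the one-step 1% VaR implied by normal versus Student-t innovations. It is a toy univariate version of the exercise described above.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    returns = 0.01 * rng.standard_normal(500)        # placeholder return series

    # GARCH(1,1) variance recursion with assumed (not estimated) parameters.
    omega, alpha, beta = 1e-6, 0.08, 0.90
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()
    for t in range(len(returns)):
        sigma2[t + 1] = omega + alpha * returns[t] ** 2 + beta * sigma2[t]

    sigma_next = np.sqrt(sigma2[-1])                 # one-step-ahead volatility
    level = 0.01                                     # 1% VaR

    var_normal = -sigma_next * stats.norm.ppf(level)
    nu = 6                                           # assumed Student-t degrees of freedom
    t_scale = np.sqrt((nu - 2) / nu)                 # rescale to unit variance
    var_student = -sigma_next * t_scale * stats.t.ppf(level, df=nu)

    print(f"1% VaR (normal):    {var_normal:.4%}")
    print(f"1% VaR (Student-t): {var_student:.4%}")
    ```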

  13. Underlying assumptions and core beliefs in anorexia nervosa and dieting.

    Science.gov (United States)

    Cooper, M; Turner, H

    2000-06-01

    To investigate assumptions and beliefs in anorexia nervosa and dieting. The Eating Disorder Belief Questionnaire (EDBQ) was administered to patients with anorexia nervosa, dieters and female controls. The patients scored more highly than the other two groups on assumptions about weight and shape, assumptions about eating and negative self-beliefs. The dieters scored more highly than the female controls on assumptions about weight and shape. The cognitive content of anorexia nervosa (both assumptions and negative self-beliefs) differs from that found in dieting. Assumptions about weight and shape may also distinguish dieters from female controls.

  14. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
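
    For readers less familiar with the instrumental variable idea invoked above, the snippet below runs a generic IV estimate on simulated data in which the observed exposure measures a latent exposure with error and a second error-prone measurement serves as the instrument. It is a schematic analogue, not the longitudinal estimator developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, beta_true = 2000, 0.5

    latent = rng.standard_normal(n)                  # true (latent) exposure
    x1 = latent + 0.7 * rng.standard_normal(n)       # error-prone measurement used as exposure
    x2 = latent + 0.7 * rng.standard_normal(n)       # second measurement used as instrument
    y = beta_true * latent + rng.standard_normal(n)  # outcome

    # Naive OLS of y on x1 is attenuated by measurement error.
    beta_ols = np.cov(x1, y)[0, 1] / np.cov(x1, y)[0, 0]

    # IV estimate with x2 as instrument for x1: beta = cov(z, y) / cov(z, x).
    beta_iv = np.cov(x2, y)[0, 1] / np.cov(x2, x1)[0, 1]

    print(f"OLS estimate: {beta_ols:.3f}  (biased toward zero)")
    print(f"IV estimate:  {beta_iv:.3f}  (close to {beta_true})")
    ```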

  15. Capturing Assumptions while Designing a Verification Model for Embedded Systems

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    A formal proof of system correctness typically holds under a number of assumptions. Leaving them implicit raises the chance of using the system in a context that violates some assumptions, which in turn may invalidate the correctness proof. The goal of this paper is to show how combining

  16. A framework for the organizational assumptions underlying safety culture

    International Nuclear Information System (INIS)

    Packer, Charles

    2002-01-01

    The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)

  17. Unrealistic Assumptions in Economics: an Analysis under the Logic of Socioeconomic Processes

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2014-11-01

    Full Text Available The realism of assumptions is an ongoing debate within the philosophy of economics. One of the most referenced papers in this matter belongs to Milton Friedman. He defends the use of unrealistic assumptions not only on pragmatic grounds, but also because of the intrinsic difficulty of determining the extent of realism. On the other hand, realists have criticized (and still do today) the use of unrealistic assumptions - such as the assumption of rational choice, perfect information, homogeneous goods, etc. However, they did not accompany their statements with a proper epistemological argument that supports their positions. This work aims to show that the realism of (a particular sort of) assumptions is clearly relevant when examining economic models, since the system under study (the real economies) is not compatible with the logic of invariance and of mechanisms, but with the logic of possibility trees. Because of this, models will not function as tools for predicting outcomes, but as representations of alternative scenarios, whose similarity to the real world will be examined in terms of the verisimilitude of a class of model assumptions.

  18. Consequences of Violated Equating Assumptions under the Equivalent Groups Design

    Science.gov (United States)

    Lyren, Per-Erik; Hambleton, Ronald K.

    2011-01-01

    The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…

  19. Climate change scenarios in Mexico from models results under the assumption of a doubling in the atmospheric CO2

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, V.M.; Villanueva, E.E.; Garduno, R.; Adem, J. [Centro de Ciencias de la Atmosfera, Mexico (Mexico)

    1995-12-31

    General circulation models (GCMs) and energy balance models (EBMs) are the best way to simulate the complex large-scale dynamic and thermodynamic processes in the atmosphere. These models have been used to estimate the global warming due to an increase of atmospheric CO2. In Japan, Ohta and coworkers have developed a physical model based on the conservation of thermal energy applied to ponded shallow water, to compute the change in the water temperature, using the atmospheric warming and the precipitation due to the increase in the atmospheric CO2 computed by the GISS-GCM. In this work, a method similar to Ohta's is used for computing the change in ground temperature, soil moisture, evaporation, runoff and dryness index in eleven hydrological zones, using in this case the surface air temperature and precipitation due to CO2 doubling, computed by the GFDLR30-GCM and the version of the Adem thermodynamic climate model (CTM-EBM), which contains the three feedbacks (cryosphere, clouds and water vapor), and does not include water vapor in the CO2 atmospheric spectral band (12-19 μm).

  1. Uncertainties in sandy shorelines evolution under the Bruun rule assumption

    Directory of Open Access Journals (Sweden)

    Gonéri Le Cozannet

    2016-04-01

    Full Text Available In the current practice of sandy shoreline change assessments, the local sedimentary budget is evaluated using the sediment balance equation, that is, by summing the contributions of longshore and cross-shore processes. The contribution of future sea-level rise induced by climate change is usually obtained using the Bruun rule, which assumes that the shoreline retreat is equal to the change of sea level divided by the slope of the upper shoreface. However, it remains unclear whether this approach is appropriate to account for the impacts of future sea-level rise. This is due to the lack of relevant observations to validate the Bruun rule under the expected sea-level rise rates. To address this issue, this article estimates the coastal settings and period of time under which the use of the Bruun rule could be (in)validated, in the case of wave-exposed gently-sloping sandy beaches. Using the sedimentary budgets of Stive (2004) and probabilistic sea-level rise scenarios based on IPCC, we provide shoreline change projections that account for all uncertain hydrosedimentary processes affecting idealized coasts (impacts of sea-level rise, storms and other cross-shore and longshore processes). We evaluate the relative importance of each source of uncertainties in the sediment balance equation using a global sensitivity analysis. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defences, the model predicts a perceivable shift toward generalized beach erosion by the middle of the 21st century. In contrast, the model predictions are unlikely to differ from the current situation in the case of scenario RCP 2.6. Finally, the contribution of sea-level rise and climate change scenarios to sandy shoreline change projection uncertainties increases with time during the 21st century. Our results have three primary implications for coastal settings similar to those described in Stive (2004): first, the validation of the Bruun rule will not necessarily be
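
    For reference, the Bruun rule itself reduces to a one-line calculation: shoreline retreat equals the sea-level change divided by the mean slope of the active upper-shoreface profile. The sketch below evaluates it for a few illustrative slopes and sea-level-rise values; the numbers are not those of the study.

    ```python
    # Bruun rule: retreat R = S / tan(beta), with S the sea-level rise and
    # tan(beta) the average slope of the active beach profile. Values are
    # illustrative only.
    sea_level_rise = [0.3, 0.6, 1.0]        # metres (example scenarios)
    slopes = [0.01, 0.02]                   # gently-sloping sandy profiles

    for s in sea_level_rise:
        for tan_beta in slopes:
            retreat = s / tan_beta
            print(f"SLR = {s:.1f} m, slope = {tan_beta:.2f} -> retreat ≈ {retreat:.0f} m")
    ```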

  2. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    Science.gov (United States)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy's star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields. OMEGA

  3. Weak convergence of Jacobian determinants under asymmetric assumptions

    Directory of Open Access Journals (Sweden)

    Teresa Alberico

    2012-05-01

    Full Text Available Let $\Omega$ be a bounded open set in $\mathbb{R}^2$, sufficiently smooth, and let $f_k=(u_k,v_k)$ and $f=(u,v)$ be mappings belonging to the Sobolev space $W^{1,2}(\Omega,\mathbb{R}^2)$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\mu$ in the sense of measures, and if one allows different assumptions on the two components of $f_k$ and $f$, e.g. $u_k \rightharpoonup u$ weakly in $W^{1,2}(\Omega)$ and $v_k \rightharpoonup v$ weakly in $W^{1,q}(\Omega)$ for some $q\in(1,2)$, then $d\mu=J_f\,dz$. Moreover, we show that this result is optimal in the sense that the conclusion fails for $q=1$. On the other hand, we prove that $d\mu=J_f\,dz$ remains valid also if one considers the case $q=1$, but it is then necessary to require that $u_k$ converges weakly to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\Omega)$, and precisely $u_k \rightharpoonup u$ weakly in $W^{1,L^2\log^\alpha L}(\Omega)$ for some $\alpha>1$.

  4. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Froese et al. ... that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...

  5. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results from different LCA studies can be consistent. This paper is an attempt to identify, review and analyse methodologies and technical assumptions used in various parts of selected waste LCA models. Several criteria were identified, which could have significant impacts on the results, such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing...

  6. Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation

    Science.gov (United States)

    Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels

    2016-06-01

    An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.

  7. Oil production, oil prices, and macroeconomic adjustment under different wage assumptions

    International Nuclear Information System (INIS)

    Harvie, C.; Maleka, P.T.

    1992-01-01

    In a previous paper one of the authors developed a simple model to try to identify the possible macroeconomic adjustment processes arising in an economy experiencing a temporary period of oil production, under alternative wage adjustment assumptions, namely nominal and real wage rigidity. Certain assumptions were made regarding the characteristics of actual production, the permanent revenues generated from that oil production, and the net exports/imports of oil. The role of the price of oil, and possible changes in that price was essentially ignored. Here we attempt to incorporate the price of oil, as well as changes in that price, in conjunction with the production of oil, the objective being to identify the contribution which the price of oil, and changes in it, make to the adjustment process itself. The emphasis in this paper is not given to a mathematical derivation and analysis of the model's dynamics of adjustment or its comparative statics, but rather to the derivation of simulation results from the model, for a specific assumed case, using a numerical algorithm program, conducive to the type of theoretical framework utilized here. The results presented suggest that although the adjustment profiles of the macroeconomic variables of interest, for either wage adjustment assumption, remain fundamentally the same, the magnitude of these adjustments is increased. Hence to derive a more accurate picture of the dimensions of adjustment of these macroeconomic variables, it is essential to include the price of oil as well as changes in that price. (Author)

  8. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make more explicit the underlying assumptions, values and views regarding children and childhood that shape design decisions. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting on them? ... The workshop intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design.

  9. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
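
    A minimal, frequentist (non-Bayesian) version of such a mixing model is sketched below: given mean tracer concentrations for the three sources, source proportions are obtained by non-negative least squares and normalised to sum to one. The tracer values are invented, and the sketch omits the error models and priors whose influence the study quantifies.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns = sources (arable topsoil, road verge, subsurface); rows = tracers.
    # Concentrations are hypothetical illustration values.
    sources = np.array([
        [12.0,  30.0,  5.0],
        [ 3.5,   1.0,  8.0],
        [40.0,  22.0, 60.0],
        [ 0.8,   2.5,  0.3],
    ])
    sediment = np.array([7.8, 6.1, 52.0, 0.6])   # tracer signature of the SPM sample

    # Non-negative least squares, then normalise so proportions sum to 1.
    props, _ = nnls(sources, sediment)
    props = props / props.sum()
    for name, p in zip(["topsoil", "road verge", "subsurface"], props):
        print(f"{name:11s} {p:.2f}")
    ```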

  10. On the validity of Brownian assumptions in the spin van der Waals model

    International Nuclear Information System (INIS)

    Oh, Suhk Kun

    1985-01-01

    A simple Brownian motion theory of the spin van der Waals model, which can be stationary, Markoffian or Gaussian, is studied. By comparing the Brownian motion theory with an exact theory called the generalized Langevin equation theory, the validity of the Brownian assumptions is tested. Thereby, it is shown explicitly how the Markoffian and Gaussian properties are modified in the spin van der Waals model under the influence of quantum fluctuations and long range ordering. (Author)

  11. Allele Age Under Non-Classical Assumptions is Clarified by an Exact Computational Markov Chain Approach.

    Science.gov (United States)

    De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason

    2017-09-19

    Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to N_e = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
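
    The core computation alluded to above is, for any finite absorbing Markov chain, a sparse linear solve: if Q is the transition matrix restricted to transient states, the expected number of steps to absorption from each state solves (I - Q) t = 1. The toy chain below (a lazy random walk, not a Wright-Fisher model) only shows the mechanics.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    # Toy absorbing chain: a lazy random walk on states 0..n with absorbing ends.
    n = 200
    rows, cols, vals = [], [], []
    for i in range(1, n):                       # transient states 1..n-1
        for j, p in ((i - 1, 0.25), (i, 0.5), (i + 1, 0.25)):
            if 1 <= j <= n - 1:                 # moves into absorbing states drop out of Q
                rows.append(i - 1); cols.append(j - 1); vals.append(p)

    Q = sp.csc_matrix((vals, (rows, cols)), shape=(n - 1, n - 1))
    I = sp.identity(n - 1, format="csc")

    # Expected steps to absorption from each transient state: (I - Q) t = 1.
    t = spsolve(I - Q, np.ones(n - 1))
    print("expected absorption time from the middle state:", t[n // 2 - 1])
    ```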

  12. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more are applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  13. On the derivation of approximations to cellular automata models and the assumption of independence.

    Science.gov (United States)

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes, however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model and the assumption of independence between the state of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
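
    The independence assumption discussed above is what lets a proliferation cellular automaton be collapsed to the logistic mean-field equation dC/dt = p C (1 - C). The sketch below, with made-up parameters, runs a simple 1D proliferation CA and compares its average occupancy with the logistic prediction; any gap between the two curves reflects the spatial correlations that the independence assumption ignores.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    size, p, steps, reps = 200, 0.01, 300, 20   # lattice size, proliferation prob., steps, repeats

    def ca_density(size, p, steps, rng):
        """1D proliferation CA: each occupied site tries, with probability p per
        step, to place a daughter on a random neighbour (no effect if occupied)."""
        occ = np.zeros(size, dtype=bool)
        occ[rng.choice(size, size=size // 20, replace=False)] = True   # 5% initial occupancy
        dens = [occ.mean()]
        for _ in range(steps):
            for i in np.flatnonzero(occ):
                if rng.random() < p:
                    j = (i + rng.choice([-1, 1])) % size
                    occ[j] = True
            dens.append(occ.mean())
        return np.array(dens)

    ca = np.mean([ca_density(size, p, steps, rng) for _ in range(reps)], axis=0)

    # Mean-field (logistic) prediction under the independence assumption.
    mf = np.empty(steps + 1)
    mf[0] = 0.05
    for t in range(steps):
        mf[t + 1] = mf[t] + p * mf[t] * (1.0 - mf[t])

    print("CA density at final step:        ", ca[-1])
    print("mean-field density at final step:", mf[-1])
    ```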

  14. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Directory of Open Access Journals (Sweden)

    Giordano James

    2010-01-01

    Full Text Available A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e., order), and what counts as abnormality (i.e., disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice.

  15. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Science.gov (United States)

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176

  16. Pre-equilibrium assumptions and statistical model parameters effects on reaction cross-section calculations

    International Nuclear Information System (INIS)

    Avrigeanu, M.; Avrigeanu, V.

    1992-02-01

    A systematic study on the effects of statistical model parameters and semi-classical pre-equilibrium emission models has been carried out for the (n,p) reactions on the 56Fe and 60Co target nuclei. The results obtained by using various assumptions within a given pre-equilibrium emission model differ among themselves more than the results of different models used under similar conditions. The necessity of using realistic level density formulas is emphasized, especially in connection with pre-equilibrium emission models (i.e. with the exciton state density expression), while basic support could be found only by replacing the Williams exciton state density formula with a realistic one. (author). 46 refs, 12 figs, 3 tabs

  17. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
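
    A miniature version of such a power simulation is sketched below, assuming the lifelines package is available: survival times are generated with a covariate effect that reverses over time (a deliberate PH violation), a Cox model is fitted, and the share of replicates in which the PH test rejects approximates statistical power. The package calls and keyword arguments reflect my understanding of the lifelines API, not details taken from the paper.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter                        # assumed available
    from lifelines.statistics import proportional_hazard_test

    rng = np.random.default_rng(4)
    n_obs, n_sims, rejections = 200, 100, 0

    for _ in range(n_sims):
        x = rng.integers(0, 2, n_obs)                        # binary covariate
        # Piecewise-exponential times: the covariate effect reverses after t = 1,
        # a deliberate violation of proportional hazards.
        early = rng.exponential(1.0 / np.exp(0.8 * x))
        late = 1.0 + rng.exponential(1.0 / np.exp(-0.8 * x))
        t = np.where(early < 1.0, early, late)
        df = pd.DataFrame({"T": t, "E": 1, "x": x})          # no censoring, for simplicity

        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
        res = proportional_hazard_test(cph, df, time_transform="rank")
        if res.summary["p"].iloc[0] < 0.05:
            rejections += 1

    print(f"estimated power to detect the PH violation: {rejections / n_sims:.2f}")
    ```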

  18. Limitations to the Dutch cannabis toleration policy: Assumptions underlying the reclassification of cannabis above 15% THC.

    Science.gov (United States)

    Van Laar, Margriet; Van Der Pol, Peggy; Niesink, Raymond

    2016-08-01

    The Netherlands has seen an increase in Δ9-tetrahydrocannabinol (THC) concentrations from approximately 8% in the 1990s up to 20% in 2004. Increased cannabis potency may lead to higher THC-exposure and cannabis related harm. The Dutch government officially condones the sale of cannabis from so called 'coffee shops', and the Opium Act distinguishes cannabis as a Schedule II drug with 'acceptable risk' from other drugs with 'unacceptable risk' (Schedule I). Even in 1976, however, cannabis potency was taken into account by distinguishing hemp oil as a Schedule I drug. In 2011, an advisory committee recommended tightening up legislation, leading to a 2013 bill proposing the reclassification of high potency cannabis products with a THC content of 15% or more as a Schedule I drug. The purpose of this measure was twofold: to reduce public health risks and to reduce illegal cultivation and export of cannabis by increasing punishment. This paper focuses on the public health aspects and describes the (explicit and implicit) assumptions underlying this '15% THC measure', as well as to what extent these are supported by scientific research. Based on scientific literature and other sources of information, we conclude that the 15% measure can provide in theory a slight health benefit for specific groups of cannabis users (i.e., frequent users preferring strong cannabis, purchasing from coffee shops, using 'steady quantities' and not changing their smoking behaviour), but certainly not for all cannabis users. These gains should be weighed against the investment in enforcement and the risk of unintended (adverse) effects. Given the many assumptions and uncertainty about the nature and extent of the expected buying and smoking behaviour changes, the measure is a political choice and based on thin evidence. Copyright © 2016 Springer. Published by Elsevier B.V. All rights reserved.

  19. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    Science.gov (United States)

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

    It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.
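
    To make the index of moderated mediation concrete, the simulation below (linear models throughout, with illustrative coefficients) fits the mediator regression M ~ X + W + X:W and the outcome regression Y ~ X + M by ordinary least squares and computes the index as the product of the X:W coefficient and the M coefficient.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5000

    # Simulated linear moderated-mediation data (coefficients are illustrative).
    X = rng.standard_normal(n)                    # treatment / exposure
    W = rng.standard_normal(n)                    # moderator
    M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.standard_normal(n)   # mediator
    Y = 0.5 * M + 0.1 * X + rng.standard_normal(n)                 # outcome

    def ols(y, *cols):
        """Least-squares coefficients for y on an intercept plus the given columns."""
        Xmat = np.column_stack([np.ones(len(y)), *cols])
        return np.linalg.lstsq(Xmat, y, rcond=None)[0]

    a = ols(M, X, W, X * W)        # a[3] is the X*W coefficient (a3)
    b = ols(Y, X, M)               # b[2] is the M coefficient (b1)

    index_mod_med = a[3] * b[2]    # index of moderated mediation = a3 * b1
    print(f"index of moderated mediation ≈ {index_mod_med:.3f} (true value 0.3 * 0.5 = 0.15)")
    ```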

  20. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using one-dimensional nonlinear advection equation are performed to validate this criterion.

  1. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

    Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600

  2. Tale of Two Courthouses: A Critique of the Underlying Assumptions in Chronic Disease Self-Management for Aboriginal People

    Directory of Open Access Journals (Sweden)

    Isabelle Ellis

    2009-12-01

    Full Text Available This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models. Namely, that there is a clear set of instructions for patients to comply with, that all health care providers agree with; and that the health care provider and the patient agree with the chronic disease self-management plan that was developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. These assumptions have been found to lack validity in this context, therefore an alternative model to enhance chronic disease care is proposed.

  3. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    Science.gov (United States)

    Reardon, Sean F.; Raudenbush, Stephen W.

    2013-01-01

    The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…

  4. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  5. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid

  6. Individual Change and the Timing and Onset of Important Life Events: Methods, Models, and Assumptions

    Science.gov (United States)

    Grimm, Kevin; Marcoulides, Katerina

    2016-01-01

    Researchers are often interested in studying how the timing of a specific event affects concurrent and future development. When faced with such research questions there are multiple statistical models to consider and those models are the focus of this paper as well as their theoretical underpinnings and assumptions regarding the nature of the…

  7. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...

  8. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    Science.gov (United States)

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  9. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.
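    As an illustration of the kind of flexible error density involved, a generic semi-nonparametric Hermite-type expansion (in the spirit of Gallant and Nychka; the paper's exact specification may differ) can be written as:

```latex
% Hedged sketch: a generic Hermite-type density expansion; the paper's exact
% specification may differ from this form.
\[
  f(\varepsilon) \;=\;
  \frac{\bigl[P_K(\varepsilon)\bigr]^{2}\,\phi(\varepsilon)}
       {\int \bigl[P_K(u)\bigr]^{2}\,\phi(u)\,du},
  \qquad
  P_K(\varepsilon) \;=\; \sum_{k=0}^{K}\gamma_k\,\varepsilon^{k}.
\]
```

    Setting the higher-order coefficients to zero collapses the expansion to the normal density, so a test of normality amounts to testing those coefficients; in the bivariate selection-model case the polynomial is taken in both error terms, with bivariate normality as the special case.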

  10. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  11. The philosophy and assumptions underlying exposure limits for ionising radiation, inorganic lead, asbestos and noise

    International Nuclear Information System (INIS)

    Akber, R.

    1996-01-01

    Full text: A review of the literature relating to exposure to, and exposure limits for, ionising radiation, inorganic lead, asbestos and noise was undertaken. The four hazards were chosen because they were insidious and ubiquitous, were potential hazards in both occupational and environmental settings and had early and late effects depending on dose and dose rate. For all four hazards, the effect of the hazard was enhanced by other exposures such as smoking or organic solvents. In the cases of inorganic lead and noise, there were documented health effects which affected a significant percentage of the exposed populations at or below the [effective] exposure limits. This was not the case for ionising radiation and asbestos. None of the exposure limits considered exposure to multiple mutagens/carcinogens in the calculation of risk. Ionising radiation was the only one of the hazards to have a model of all likely exposures, occupational, environmental and medical, as the basis for the exposure limits. The other three considered occupational exposure in isolation from environmental exposure. Inorganic lead and noise had economic considerations underlying the exposure limits and the exposure limits for asbestos were based on the current limit of detection. All four hazards had many variables associated with exposure, including idiosyncratic factors, that made modelling the risk very complex. The scientific idea of a time-weighted average based on an eight-hour day and forty-hour week, on which the exposure limits for lead, asbestos and noise were based, was underpinned by neither empirical evidence nor scientific hypothesis. The methodology of the ACGIH in setting limits later brought into law may have been unduly influenced by the industries most closely affected by those limits. Measuring exposure over part of an eight-hour day and extrapolating to model exposure over the longer term is not the most effective way to model exposure. The statistical techniques used

  12. Do unreal assumptions pervert behaviour?

    DEFF Research Database (Denmark)

    Petersen, Verner C.

    The purpose of this paper is to take a critical look at some of the assumptions and theories found in economics and discuss their implications for the models and the practices found in the management of business. The expectation is that the unrealistic assumptions of economics have become taken for granted and tacitly included into theories and models of management, guiding business and management to behave in a fashion that apparently makes these assumptions become "true", thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some of the basic assumptions underlying the theories found in economics: assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert behaviour.

  13. A critical evaluation of the local-equilibrium assumption in modeling NAPL-pool dissolution

    Science.gov (United States)

    Seagren, Eric A.; Rittmann, Bruce E.; Valocchi, Albert J.

    1999-07-01

    An analytical modeling analysis was used to assess when local equilibrium (LE) and nonequilibrium (NE) modeling approaches may be appropriate for describing nonaqueous-phase liquid (NAPL) pool dissolution. NE mass-transfer between NAPL pools and groundwater is expected to affect the dissolution flux under conditions corresponding to values of Sh'St (the modified Sherwood number (Lx·kl/Dz) multiplied by the Stanton number (kl/vx)) below approximately 400; for larger values of Sh'St, the NE and LE solutions converge, and the LE assumption is appropriate. Based on typical groundwater conditions, many cases of interest are expected to fall in this range. The parameter with the greatest impact on Sh'St is kl. The NAPL pool mass-transfer coefficient correlation of Pfannkuch [Pfannkuch, H.-O., 1984. Determination of the contaminant source strength from mass exchange processes at the petroleum-ground-water interface in shallow aquifer systems. In: Proceedings of the NWWA/API Conference on Petroleum Hydrocarbons and Organic Chemicals in Ground Water—Prevention, Detection, and Restoration, Houston, TX. Natl. Water Well Assoc., Worthington, OH, Nov. 1984, pp. 111-129.] was evaluated using the toluene pool data from Seagren et al. [Seagren, E.A., Rittmann, B.E., Valocchi, A.J., 1998. An experimental investigation of NAPL-pool dissolution enhancement by flushing. J. Contam. Hydrol., accepted.]. Dissolution flux predictions made with kl calculated using the Pfannkuch correlation were similar to the LE model predictions, and deviated systematically from predictions made using the average overall kl=4.76 m/day estimated by Seagren et al. [Seagren, E.A., Rittmann, B.E., Valocchi, A.J., 1998. An experimental investigation of NAPL-pool dissolution enhancement by flushing. J. Contam. Hydrol., accepted.] and from the experimental data for vx>18 m/day. The Pfannkuch correlation kl was too large for vx>≈10 m/day, possibly because of the relatively low Peclet number data used by Pfannkuch [Pfannkuch, H.-O., 1984. Determination
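    Writing out the dimensionless group referred to in the abstract (with Lx the pool length, kl the mass-transfer coefficient, Dz the vertical dispersion coefficient and vx the groundwater velocity, an interpretation assumed here):

```latex
\[
  \mathrm{Sh}'\,\mathrm{St}
  \;=\; \frac{L_x\,k_l}{D_z}\cdot\frac{k_l}{v_x}
  \;=\; \frac{L_x\,k_l^{\,2}}{D_z\,v_x},
\]
```

    so large values of Sh'St (roughly 400 and above) correspond to conditions under which the local-equilibrium description of pool dissolution is adequate, while smaller values indicate that nonequilibrium mass transfer limits the dissolution flux.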

  14. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    Science.gov (United States)

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
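    A minimal sketch of the idea, simplified to a single time point rather than the time-dependent treatment and censoring weighting actually used in marginal structural modeling: a stacked ("super learner"-style) ensemble estimates the propensity score, and stabilized inverse probability weights are formed from it. Function and variable names below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: inverse probability weighting with a stacked propensity model,
# illustrating the idea of replacing a single parametric model with a
# data-adaptive ensemble. Simplified to a point treatment; names are assumed.
import numpy as np
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def ipw_effect(X, treatment, outcome):
    """Estimate a marginal risk difference with stabilized IP weights.

    X, treatment (0/1) and outcome are NumPy arrays of equal length.
    """
    propensity_model = StackingClassifier(
        estimators=[("logit", LogisticRegression(max_iter=1000)),
                    ("gbm", GradientBoostingClassifier())],
        final_estimator=LogisticRegression(max_iter=1000),
    )
    propensity_model.fit(X, treatment)
    p = propensity_model.predict_proba(X)[:, 1]
    p_marginal = treatment.mean()
    # Stabilized weights: marginal treatment probability over conditional one.
    w = np.where(treatment == 1, p_marginal / p, (1 - p_marginal) / (1 - p))
    treated = treatment == 1
    mean1 = np.average(outcome[treated], weights=w[treated])
    mean0 = np.average(outcome[~treated], weights=w[~treated])
    return mean1 - mean0
```

    The point of the ensemble is exactly the one made above: if the single parametric propensity model is misspecified, the weights and hence the effect estimate are biased, whereas a data-adaptive stack hedges against that misspecification.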

  15. Cloud-turbulence interactions: Sensitivity of a general circulation model to closure assumptions

    International Nuclear Information System (INIS)

    Brinkop, S.; Roeckner, E.

    1993-01-01

    Several approaches to parameterize the turbulent transport of momentum, heat, water vapour and cloud water for use in a general circulation model (GCM) have been tested in one-dimensional and three-dimensional model simulations. The schemes differ with respect to their closure assumptions (conventional eddy diffusivity model versus turbulent kinetic energy closure) and also regarding their treatment of cloud-turbulence interactions. The basic properties of these parameterizations are discussed first in column simulations of a stratocumulus-topped atmospheric boundary layer (ABL) under a strong subsidence inversion during the KONTROL experiment in the North Sea. It is found that the K-models tend to decouple the cloud layer from the adjacent layers because the turbulent activity is calculated from local variables. The higher-order scheme performs better in this respect because internally generated turbulence can be transported up and down through the action of turbulent diffusion. Thus, the TKE-scheme provides not only a better link between the cloud and the sub-cloud layer but also between the cloud and the inversion as a result of cloud-top entrainment. In the stratocumulus case study, where the cloud is confined by a pronounced subsidence inversion, increased entrainment favours cloud dilution through enhanced evaporation of cloud droplets. In the GCM study, however, additional cloud-top entrainment supports cloud formation because indirect cloud generating processes are promoted through efficient ventilation of the ABL, such as the enhanced moisture supply by surface evaporation and the increased depth of the ABL. As a result, tropical convection is more vigorous, the hydrological cycle is intensified, the whole troposphere becomes warmer and moister in general and the cloudiness in the upper part of the ABL is increased. (orig.)
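    The contrast between the two closure families can be summarized with common textbook forms, given here for orientation only (the exact model equations are not reproduced in the abstract):

```latex
% Generic textbook forms, assumed here for illustration.
\[
  \overline{w'\theta'} \;=\; -K_h\,\frac{\partial \overline{\theta}}{\partial z},
  \qquad
  K_h^{\mathrm{(local)}} \;\propto\; l^{2}\left|\frac{\partial \overline{u}}{\partial z}\right| f(\mathrm{Ri}),
  \qquad
  K_h^{\mathrm{(TKE)}} \;=\; c_k\, l\, \sqrt{e},
\]
\[
  \frac{\partial e}{\partial t} \;=\; P_s + P_b
  \;+\; \frac{\partial}{\partial z}\!\left(K_e\,\frac{\partial e}{\partial z}\right)
  \;-\; \varepsilon .
\]
```

    Because the turbulent kinetic energy e is itself transported, the TKE closure lets turbulence generated in the cloud layer diffuse into adjacent layers, which is the cloud/sub-cloud coupling behaviour discussed above; a purely local K-model has no such mechanism.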

  16. Bootstrapping realized volatility and realized beta under a local Gaussianity assumption

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    The main contribution of this paper is to propose a new bootstrap method for statistics based on high frequency returns. The new method exploits the local Gaussianity and the local constancy of volatility of high frequency returns, two assumptions that can simplify inference in the high frequency... context, as recently explained by Mykland and Zhang (2009). Our main contributions are as follows. First, we show that the local Gaussian bootstrap is first-order consistent when used to estimate the distributions of realized volatility and realized betas. Second, we show that the local Gaussian bootstrap... matches accurately the first four cumulants of realized volatility, implying that this method provides third-order refinements. This is in contrast with the wild bootstrap of Gonçalves and Meddahi (2009), which is only second-order correct. Third, we show that the local Gaussian bootstrap is able...
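    A minimal sketch of the quantities involved, with the block size and standardization chosen for illustration rather than taken from the paper:

```python
# Hedged sketch: realized volatility and one "local Gaussian"-style bootstrap
# resample. Block size and the use of the block standard deviation are
# illustrative assumptions, not the paper's exact procedure.
import numpy as np

def realized_volatility(returns):
    """Sum of squared high-frequency returns."""
    return np.sum(returns ** 2)

def local_gaussian_resample(returns, block_size=20, seed=None):
    """Draw Gaussian returns block by block, matching each block's local variance."""
    rng = np.random.default_rng(seed)
    resampled = np.empty_like(returns, dtype=float)
    for start in range(0, len(returns), block_size):
        block = returns[start:start + block_size]
        local_sd = block.std(ddof=0)            # local constancy of volatility
        resampled[start:start + block_size] = rng.normal(0.0, local_sd, size=len(block))
    return resampled

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = rng.normal(0, 0.001, size=390)          # one day of 1-minute returns
    rv_boot = [realized_volatility(local_gaussian_resample(r, seed=i)) for i in range(999)]
    print(realized_volatility(r), np.percentile(rv_boot, [2.5, 97.5]))
```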

  17. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    Science.gov (United States)

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...

  18. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus

    Directory of Open Access Journals (Sweden)

    Constantinos Taliotis

    2017-10-01

    Full Text Available The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  19. The social contact hypothesis under the assumption of endemic equilibrium: Elucidating the transmission potential of VZV in Europe

    Directory of Open Access Journals (Sweden)

    E. Santermans

    2015-06-01

    Full Text Available The basic reproduction number R0 and the effective reproduction number R are pivotal parameters in infectious disease epidemiology, quantifying the transmission potential of an infection in a population. We estimate both parameters from 13 pre-vaccination serological data sets on varicella zoster virus (VZV) in 12 European countries and from population-based social contact surveys under the commonly made assumptions of endemic and demographic equilibrium. The fit to the serology is evaluated using the inferred effective reproduction number R as a model eligibility criterion combined with AIC as a model selection criterion. For only 2 out of 12 countries, the common choice of a constant proportionality factor is sufficient to provide a good fit to the seroprevalence data. For the other countries, an age-specific proportionality factor provides a better fit, assuming physical contacts lasting longer than 15 min are a good proxy for potential varicella transmission events. In all countries, primary infection with VZV most often occurs in early childhood, but there is substantial variation in transmission potential with R0 ranging from 2.8 in England and Wales to 7.6 in The Netherlands. Two non-parametric methods, the maximal information coefficient (MIC) and a random forest approach, are used to explain these differences in R0 in terms of relevant country-specific characteristics. Our results suggest an association with three general factors: inequality in wealth, infant vaccination coverage and child care attendance. This illustrates the need to consider fundamental differences between European countries when formulating and parameterizing infectious disease models.
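    Under the social contact hypothesis, transmission rates are assumed proportional to (age-specific) contact rates, so R0 can be computed as the dominant eigenvalue of a next-generation matrix built from the contact matrix scaled by a proportionality factor. The sketch below uses toy numbers and ignores age-specific susceptibility and population weighting, which the full method accounts for.

```python
# Hedged sketch: R0 under the social contact hypothesis as the dominant
# eigenvalue of a next-generation matrix built from an age-structured contact
# matrix scaled by a proportionality factor q. All numbers are toy values.
import numpy as np

def basic_reproduction_number(contact_matrix, q, infectious_period):
    """R0 = spectral radius of q * infectious_period * contact_matrix."""
    ngm = q * infectious_period * contact_matrix
    return max(abs(np.linalg.eigvals(ngm)))

if __name__ == "__main__":
    # Toy 3-age-group contact matrix (mean contacts per day).
    contacts = np.array([[10.0, 3.0, 1.0],
                         [ 3.0, 8.0, 2.0],
                         [ 1.0, 2.0, 4.0]])
    print(basic_reproduction_number(contacts, q=0.05, infectious_period=7.0))
```

    An age-specific proportionality factor, as used for most countries above, simply replaces the scalar q with a vector or matrix of factors before taking the spectral radius.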

  20. IRT models with relaxed assumptions in eRm: A manual-like instruction

    Directory of Open Access Journals (Sweden)

    REINHOLD HATZINGER

    2009-03-01

    Full Text Available Linear logistic models with relaxed assumptions (LLRA) as introduced by Fischer (1974) are a flexible tool for the measurement of change for dichotomous or polytomous responses. As opposed to the Rasch model, assumptions on dimensionality of items, their mutual dependencies and the distribution of the latent trait in the population of subjects are relaxed. Conditional maximum likelihood estimation allows for inference about treatment, covariate or trend effect parameters without taking the subjects' latent trait values into account. In this paper we will show how LLRAs based on the LLTM, LRSM and LPCM can be used to answer various questions about the measurement of change and how they can be fitted in R using the eRm package. A number of small didactic examples are provided that can easily be used as templates for real data sets. All datafiles used in this paper are available from http://eRm.R-Forge.R-project.org/

  1. Vehicle Modeling for use in the CAFE model: Process description and modeling assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Moawad, Ayman [Argonne National Lab. (ANL), Argonne, IL (United States); Kim, Namdoo [Argonne National Lab. (ANL), Argonne, IL (United States); Rousseau, Aymeric [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-06-01

    The objective of this project is to develop and demonstrate a process that, at a minimum, provides more robust information that can be used to calibrate inputs applicable under the CAFE model’s existing structure. The project will be more fully successful if a process can be developed that minimizes the need for decision trees and replaces the synergy factors by inputs provided directly from a vehicle simulation tool. The report provides a description of the process that was developed by Argonne National Laboratory and implemented in Autonomie.

  2. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  3. Modeling assumptions influence on stress and strain state in 450 t cranes hoisting winch construction

    Directory of Open Access Journals (Sweden)

    Damian GĄSKA

    2011-01-01

    Full Text Available This work investigates the FEM simulation of the stress and strain state of the selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The model of the trolley was built from several parts cooperating with each other (in contact). The influence of model assumptions (simplifications in selected construction nodes) on the value of maximum stress and strain and its area of occurrence was analyzed. The aim of this study was to determine whether the simplifications, which reduce the time required to prepare the model and perform calculations (e.g., rigid connections instead of contact), substantially change the characteristics of the model.

  4. NONLINEAR MODELS FOR DESCRIPTION OF CACAO FRUIT GROWTH WITH ASSUMPTION VIOLATIONS

    Directory of Open Access Journals (Sweden)

    JOEL AUGUSTO MUNIZ

    2017-01-01

    Full Text Available Cacao (Theobroma cacao L.) is an important fruit in the Brazilian economy, which is mainly cultivated in the southern State of Bahia. The optimal stage for harvesting is a major factor for fruit quality and the knowledge on its growth curves can help, especially in identifying the ideal maturation stage for harvesting. Nonlinear regression models have been widely used for description of growth curves. However, several studies in this subject do not consider the residual analysis, the existence of a possible dependence between longitudinal observations, or the sample variance heterogeneity, compromising the modeling quality. The objective of this work was to compare the fit of nonlinear regression models, considering residual analysis and assumption violations, in the description of the cacao (clone Sial-105) fruit growth. The data evaluated were extracted from Brito and Silva (1983), who conducted the experiment in the Cacao Research Center, Ilheus, State of Bahia. The variables fruit length, diameter and volume as a function of fruit age were studied. The use of weighting and incorporation of residual dependencies was efficient, since the modeling became more consistent, improving the model fit. Considering the first-order autoregressive structure, when needed, leads to significant reduction in the residual standard deviation, making the estimates more reliable. The Logistic model was the most efficient for the description of the cacao fruit growth.
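    A minimal sketch of fitting a logistic growth curve by ordinary nonlinear least squares; the variance weighting and first-order autoregressive error structure discussed above are deliberately omitted, and the data values are illustrative.

```python
# Hedged sketch: logistic growth curve fitted with unweighted, independent-error
# least squares. The paper additionally uses weighting and an AR(1) residual
# structure, which this minimal example omits. Data values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(age, asymptote, midpoint, rate):
    """Logistic growth: size as a function of fruit age (days)."""
    return asymptote / (1.0 + np.exp(-rate * (age - midpoint)))

age = np.array([15, 30, 45, 60, 75, 90, 105, 120, 135, 150], dtype=float)
length_cm = np.array([2.1, 4.0, 7.5, 11.8, 15.6, 18.0, 19.3, 20.0, 20.3, 20.4])

params, _ = curve_fit(logistic, age, length_cm, p0=[20.0, 60.0, 0.05])
print(dict(zip(["asymptote", "midpoint", "rate"], params)))
```

    Accounting for heteroscedastic variances and AR(1) residual correlation changes the weighting of the observations in the fit, which is why the paper reports more reliable estimates and a smaller residual standard deviation once those violations are addressed.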

  5. Modelling carbonaceous aerosol from residential solid fuel burning with different assumptions for emissions

    Directory of Open Access Journals (Sweden)

    R. Ots

    2018-04-01

    Full Text Available Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist – all emissions redistributed linearly to population density – is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than

  6. Modelling carbonaceous aerosol from residential solid fuel burning with different assumptions for emissions

    Science.gov (United States)

    Ots, Riinu; Heal, Mathew R.; Young, Dominique E.; Williams, Leah R.; Allan, James D.; Nemitz, Eiko; Di Marco, Chiara; Detournay, Anais; Xu, Lu; Ng, Nga L.; Coe, Hugh; Herndon, Scott C.; Mackenzie, Ian A.; Green, David C.; Kuenen, Jeroen J. P.; Reis, Stefan; Vieno, Massimo

    2018-04-01

    Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist - all emissions redistributed linearly to population density - is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than inventory

  7. Spatial modelling of assumption of tourism development with geographic IT using

    Directory of Open Access Journals (Sweden)

    Jitka Machalová

    2010-01-01

    Full Text Available The aim of this article is to show the possibilities of spatial modelling and analysing of assumptions of tourism development in the Czech Republic with the objective to make decision-making processes in tourism easier and more efficient (for companies, clients as well as destination managements). The development and placement of tourism depend on the factors (conditions) that influence its application in specific areas. These factors are usually divided into three groups: selective, localization and realization. Tourism is inseparably connected with space – the countryside. The countryside can be modelled and subsequently analysed by the means of geographical information technologies. With the help of spatial modelling and subsequent analyses the localization and realization conditions in the regions of the Czech Republic have been evaluated. The best localization conditions have been found in the Liberecký region. The capital city of Prague has negligible natural conditions; however, the social ones are on a high level. Next, the spatial analyses have shown that the best realization conditions are provided by the capital city of Prague. Then the Central-Bohemian, South-Moravian, Moravian-Silesian and Karlovarský regions follow. The development of a tourism destination depends not only on the localization and realization factors but is also fundamentally affected by the level of local destination management. Spatial modelling can help destination managers in decision-making processes in order to make optimal use of the destination potential and to target their marketing activities efficiently.

  8. The metaphysics of D-CTCs: On the underlying assumptions of Deutsch's quantum solution to the paradoxes of time travel

    Science.gov (United States)

    Dunlap, Lucas

    2016-11-01

    I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent a quantum solution to the paradoxes of time travel.

  9. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    Science.gov (United States)

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded in the analysis to avoid mis-specifying the model. Otherwise, using covariates that clearly violate the assumption would mean invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the use of the classical Cox PH model and the second part of the analysis is based on the use of random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, sex of the child, and number of births in the past 1 year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the
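    A minimal sketch of fitting a random survival forest with scikit-survival on synthetic child-survival-style data; the variable names, values and hyperparameters are illustrative assumptions, not the DHS covariates analysed in the paper.

```python
# Hedged sketch: random survival forest on synthetic data resembling a
# child-survival analysis. Variables, values and settings are assumptions.
import numpy as np
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "sex_of_child": rng.integers(0, 2, n),
    "sex_of_household_head": rng.integers(0, 2, n),
    "births_past_year": rng.integers(0, 3, n),
})
age_months = rng.uniform(0, 60, n)          # age at death or censoring (months)
died = rng.random(n) < 0.08                 # event indicator
y = Surv.from_arrays(event=died, time=age_months)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X, y)
# Ensemble splits do not rely on proportional hazards, so covariates that
# violate the PH assumption can still be ranked, e.g. via permutation importance.
print(rsf.score(X, y))                      # concordance index on training data
```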

  10. The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment

    International Nuclear Information System (INIS)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern

    2006-10-01

    This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. The parameters are topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is widely accepted as method, still some improvements and refinement are necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained

  11. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    Science.gov (United States)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.

  12. The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern [eds.

    2006-10-15

    This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present prerequisites, methods and data used, in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context; and the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is widely accepted as method, still some improvements and refinement are necessary in collecting missing site data, reanalysing site data, reviewing radionuclide specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis.

  13. The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern [eds.

    2006-10-15

    This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. The parameters are topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is widely accepted as method, still some improvements and refinement are necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained.

  14. The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment

    International Nuclear Information System (INIS)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern

    2006-10-01

    This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present prerequisites, methods and data used, in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context; and the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is widely accepted as method, still some improvements and refinement are necessary in collecting missing site data, reanalysing site data, reviewing radionuclide specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis

  15. Operating Characteristics of Statistical Methods for Detecting Gene-by-Measured Environment Interaction in the Presence of Gene-Environment Correlation under Violations of Distributional Assumptions.

    Science.gov (United States)

    Van Hulle, Carol A; Rathouz, Paul J

    2015-02-01

    Accurately identifying interactions between genetic vulnerabilities and environmental factors is of critical importance for genetic research on health and behavior. In the previous work of Van Hulle et al. (Behavior Genetics, Vol. 43, 2013, pp. 71-84), we explored the operating characteristics for a set of biometric (e.g., twin) models of Rathouz et al. (Behavior Genetics, Vol. 38, 2008, pp. 301-315), for testing gene-by-measured environment interaction (GxM) in the presence of gene-by-measured environment correlation (rGM) where data followed the assumed distributional structure. Here we explore the effects that violating distributional assumptions have on the operating characteristics of these same models even when structural model assumptions are correct. We simulated N = 2,000 replicates of n = 1,000 twin pairs under a number of conditions. Non-normality was imposed on either the putative moderator or on the ultimate outcome by ordinalizing or censoring the data. We examined the empirical Type I error rates and compared Bayesian information criterion (BIC) values. In general, non-normality in the putative moderator had little impact on the Type I error rates or BIC comparisons. In contrast, non-normality in the outcome was often mistaken for or masked GxM, especially when the outcome data were censored.

  16. Computational studies of global nuclear energy development under the assumption of the world's heterogeneous development

    International Nuclear Information System (INIS)

    Egorov, A.F.; Korobejnikov, V.V.; Poplavskaya, E.V.; Fesenko, G.A.

    2013-01-01

    The authors study a mathematical model of global nuclear energy development until the end of this century. For the comparative scenario analysis of the transition to sustainable nuclear energy systems, models of a heterogeneous world with an allowance for specific national development are investigated.

  17. On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling

    Science.gov (United States)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    Boundary conditions for fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specifications for arbitrary thermal boundary conditions are not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications and the latter condition could lead to an ill posed problem for fully-developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations is examined. The approach taken is to assume a Taylor expansion in the wall normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero value at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited only to a very small region near the wall.
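    A plausible form of the expansion described above (notation assumed here), with y the wall-normal coordinate:

```latex
% Assumed notation: a, b, c are random expansion coefficients; overbars denote averaging.
\[
  \theta'(y) \;=\; a \;+\; b\,y \;+\; c\,y^{2} \;+\; \mathcal{O}(y^{3}),
  \qquad
  \overline{\theta'^{2}}(y) \;=\; \overline{a^{2}} \;+\; 2\,\overline{ab}\,y
  \;+\; \bigl(\overline{b^{2}} + 2\,\overline{ac}\bigr)\,y^{2} \;+\; \dots
\]
```

    The conventional assumption of vanishing wall temperature fluctuations corresponds to a ≡ 0, so that the temperature variance grows as y² near the wall; retaining a ≠ 0 allows a finite wall value, which is the more general case the paper examines.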

  18. Assumptions to the model of managing knowledge workers in modern organizations

    Directory of Open Access Journals (Sweden)

    Igielski Michał

    2017-05-01

    Full Text Available Changes in the twenty-first century come faster and more suddenly, and are not always desirable for the smooth functioning of the company. This is the domain of globalization, in which new events - opportunities or threats - force the company to act all the time. More and more depends on the intangible assets of the undertaking, its strategic potential. Certain types of work require more knowledge, experience, independent thinking and customization than others. Therefore in this article the author has taken up the subject of knowledge workers in contemporary organizations. The aim of the study is to attempt to formulate assumptions for a knowledge management model in these organizations, based on literature analysis and empirical research. In this regard, the author describes the contemporary conditions of employee management and the skills and competences of knowledge workers. In addition, he conducted research (2016) in 100 medium enterprises in the province of Pomerania, using a tool in the form of a questionnaire and an interview. Already at the beginning of the analysis of the collected data, it turned out that it should be important for all employers to discern differences in the creation of a new category of managers who have knowledge useful for the functioning of the company. Moreover, with the experience gained in a similar research process previously carried out in companies from the Baltic Sea Region, the author knew about the positive influence of these people on creating new solutions or improving the quality of already existing products or services.

  19. TESTING THE ASSUMPTIONS AND INTERPRETING THE RESULTS OF THE RASCH MODEL USING LOG-LINEAR PROCEDURES IN SPSS

    NARCIS (Netherlands)

    TENVERGERT, E; GILLESPIE, M; KINGMA, J

    This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total

  20. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site level evaluations of simulated methane emissions are quite different than evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink) and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50 N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  1. Influence of road network and population demand assumptions in evacuation modeling for distant tsunamis

    Science.gov (United States)

    Henry, Kevin; Wood, Nathan J.; Frazier, Tim G.

    2017-01-01

    Tsunami evacuation planning in coastal communities is typically focused on local events where at-risk individuals must move on foot in a matter of minutes to safety. Less attention has been placed on distant tsunamis, where evacuations unfold over several hours, are often dominated by vehicle use and are managed by public safety officials. Traditional traffic simulation models focus on estimating clearance times but often overlook the influence of varying population demand, alternative modes, background traffic, shadow evacuation, and traffic management alternatives. These factors are especially important for island communities with limited egress options to safety. We use the coastal community of Balboa Island, California (USA), as a case study to explore the range of potential clearance times prior to wave arrival for a distant tsunami scenario. We use a first-in–first-out queuing simulation environment to estimate variations in clearance times, given varying assumptions of the evacuating population (demand) and the road network over which they evacuate (supply). Results suggest clearance times are less than wave arrival times for a distant tsunami, except when we assume maximum vehicle usage for residents, employees, and tourists for a weekend scenario. A two-lane bridge to the mainland was the primary traffic bottleneck, thereby minimizing the effect of departure times, shadow evacuations, background traffic, boat-based evacuations, and traffic light timing on overall community clearance time. Reducing vehicular demand generally reduced clearance time, whereas improvements to road capacity had mixed results. Finally, failure to recognize non-residential employee and tourist populations in the vehicle demand substantially underestimated clearance time.
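    A toy sketch of the clearance-time logic at a single bottleneck served first-in-first-out (such as the two-lane bridge identified above); the capacity, demand and departure-window values are illustrative assumptions, and the actual study uses a full queuing simulation over the road network.

```python
# Hedged sketch: deterministic FIFO queue at one bottleneck, illustrating how
# demand (vehicles) and supply (bottleneck capacity) jointly set clearance time.
# All numbers are toy values, not the Balboa Island inputs.
def clearance_time_hours(total_vehicles, departure_window_h, bottleneck_capacity_vph):
    """Time until the last vehicle crosses the bottleneck.

    Vehicles enter the queue over `departure_window_h` hours and are served
    FIFO at `bottleneck_capacity_vph` vehicles per hour.
    """
    service_time = total_vehicles / bottleneck_capacity_vph
    # When demand exceeds what the bottleneck can serve within the departure
    # window, the bottleneck (not departure timing) controls clearance time.
    return max(departure_window_h, service_time)

if __name__ == "__main__":
    # Toy numbers: 4,000 vehicles, 1.5 h departure window, 1,500 veh/h capacity.
    print(clearance_time_hours(4000, 1.5, 1500))  # ~2.67 h
```

    The max() captures the finding summarized above: once the bottleneck is saturated, varying departure times or background traffic changes clearance time little, whereas reducing vehicular demand does.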

  2. Matrix Diffusion for Performance Assessment - Experimental Evidence, Modelling Assumptions and Open Issues

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, A

    2004-07-01

    In this report a comprehensive overview on the matrix diffusion of solutes in fractured crystalline rocks is presented. Some examples from observations in crystalline bedrock are used to illustrate that matrix diffusion indeed acts on various length scales. Fickian diffusion is discussed in detail followed by some considerations on rock porosity. Due to the fact that the dual-porosity medium model is a very common and versatile method for describing solute transport in fractured porous media, the transport equations and the fundamental assumptions, approximations and simplifications are discussed in detail. There is a variety of geometrical aspects, processes and events which could influence matrix diffusion. The most important of these, such as, e.g., the effect of the flow-wetted fracture surface, channelling and the limited extent of the porous rock for matrix diffusion etc., are addressed. In a further section open issues and unresolved problems related to matrix diffusion are mentioned. Since matrix diffusion is one of the key retarding processes in geosphere transport of dissolved radionuclide species, matrix diffusion was consequently taken into account in past performance assessments of radioactive waste repositories in crystalline host rocks. Some issues regarding matrix diffusion are site-specific while others are independent of the specific situation of a planned repository for radioactive wastes. Eight different performance assessments from Finland, Sweden and Switzerland were considered with the aim of finding out how matrix diffusion was addressed, and whether a consistent picture emerges regarding the varying methodology of the different radioactive waste organisations. In the final section of the report some conclusions are drawn and an outlook is given. An extensive bibliography provides the reader with the key papers and reports related to matrix diffusion. (author)
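    For orientation, a common textbook form of the dual-porosity transport equations with matrix diffusion (given here as an assumed illustration, not the report's exact formulation) reads:

```latex
% Assumed textbook form of dual-porosity transport with matrix diffusion.
\[
  R_f \frac{\partial c_f}{\partial t}
  + v \frac{\partial c_f}{\partial x}
  - D_L \frac{\partial^{2} c_f}{\partial x^{2}}
  + \lambda R_f c_f
  \;=\; \frac{\varepsilon_p D_p}{b}\,
        \left.\frac{\partial c_p}{\partial z}\right|_{z=0},
  \qquad
  R_p \frac{\partial c_p}{\partial t}
  \;=\; D_p \frac{\partial^{2} c_p}{\partial z^{2}} - \lambda R_p c_p ,
\]
```

    where c_f and c_p are the concentrations in the fracture and in the matrix pore water, v the water velocity in the fracture, D_L the longitudinal dispersion coefficient, D_p the pore diffusion coefficient, ε_p the matrix porosity, b the fracture half-aperture, R_f and R_p retardation factors, λ the decay constant and z the coordinate perpendicular to the fracture surface. The diffusive exchange term on the right-hand side of the fracture equation is the retarding mechanism whose assumptions and limitations the report reviews.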

  3. False assumptions.

    Science.gov (United States)

    Swaminathan, M

    1997-01-01

    Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media.

  4. Cement/clay interactions: feedback on the increasing complexity of modeling assumptions

    International Nuclear Information System (INIS)

    Marty, Nicolas C.M.; Gaucher, Eric C.; Tournassat, Christophe; Gaboreau, Stephane; Vong, Chan Quang; Claret, F.; Munier, Isabelle; Cochepin, Benoit

    2012-01-01

    Document available in extended abstract form only. Cementitious materials will be widely used in the French concept of radioactive waste repositories. During their degradation over time, in contact with geological pore water, they will release hyper-alkaline fluids rich in calcium and alkaline cations. The chemical gradient likely to develop at cement/clay interfaces will induce geochemical transformations. The first simplified calculations, based mainly on simple mass-balance considerations, led to a very pessimistic understanding of the real expansion mechanism of the alkaline plume. However, the geochemical and migration processes are much more complex because of the dissolution of the barrier's accessory phases and the precipitation of secondary minerals. To describe and understand this complexity, coupled geochemistry and transport calculations are a useful and indeed mandatory tool. Furthermore, such models, when properly calibrated against experimental results, are able to give insight into time scales unreachable with experiments. Over roughly the past 20 years, numerous papers have described the results of reactive transport modeling of cement/clay interactions with various numerical assumptions. For example, some authors selected a purely thermodynamic approach while others preferred a coupled thermodynamic/kinetic approach. Unfortunately, most of these studies used different and not directly comparable parameters such as space discretization, initial and boundary conditions, thermodynamic databases, clayey and cementitious materials, etc. This study revisits the types of simulations proposed in the past to represent the effect of an alkaline perturbation with regard to the degree of complexity that was considered. The main goal of the study is to perform simulations with a consistent set of data and an increasing complexity. In doing so, the analysis of numerical results will give a clear vision of key parameters driving the expansion of alteration fronts and

  5. Estimating risks and relative risks in case-base studies under the assumptions of gene-environment independence and Hardy-Weinberg equilibrium.

    Directory of Open Access Journals (Sweden)

    Tina Tsz-Ting Chui

    Full Text Available Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption.
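
    As a rough illustration of the two structural assumptions exploited above, the following minimal Python sketch (not the authors' code; the allele frequency, exposure prevalence and all names are illustrative) computes the expected joint genotype-by-exposure probabilities under Hardy-Weinberg equilibrium and gene-environment independence:

```python
# Minimal sketch: expected genotype x exposure cell probabilities under
# Hardy-Weinberg equilibrium (HWE) and gene-environment independence.
# All names and numbers are illustrative assumptions, not from the record.

def genotype_probs_hwe(q):
    """Genotype probabilities for a biallelic locus with risk-allele frequency q,
    assuming Hardy-Weinberg equilibrium."""
    return {0: (1 - q) ** 2,      # zero copies of the risk allele
            1: 2 * q * (1 - q),   # one copy
            2: q ** 2}            # two copies

def joint_probs(q, p_exposed):
    """Joint genotype x exposure probabilities under gene-environment independence."""
    geno = genotype_probs_hwe(q)
    return {(g, e): pg * (p_exposed if e == 1 else 1 - p_exposed)
            for g, pg in geno.items() for e in (0, 1)}

if __name__ == "__main__":
    cells = joint_probs(q=0.3, p_exposed=0.4)
    for (g, e), p in sorted(cells.items()):
        print(f"genotype={g} copies, exposed={e}: P={p:.4f}")
    print("total:", round(sum(cells.values()), 10))  # sanity check: sums to 1
```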

  6. Estimating Risks and Relative Risks in Case-Base Studies under the Assumptions of Gene-Environment Independence and Hardy-Weinberg Equilibrium

    Science.gov (United States)

    Chui, Tina Tsz-Ting; Lee, Wen-Chung

    2014-01-01

    Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption. PMID:25137392

  7. Stability and disease persistence in an age-structured SIS epidemic model with vertical transmission and proportionate mixing assumption

    International Nuclear Information System (INIS)

    El-Doma, M.

    2001-02-01

    The stability of the endemic equilibrium of an SIS age-structured epidemic model of a vertically as well as horizontally transmitted disease is investigated when the force of infection is of proportionate mixing assumption type. We also investigate the uniform weak disease persistence. (author)

  8. Academic Achievement and Behavioral Health among Asian American and African American Adolescents: Testing the Model Minority and Inferior Minority Assumptions

    Science.gov (United States)

    Whaley, Arthur L.; Noel, La Tonya

    2013-01-01

    The present study tested the model minority and inferior minority assumptions by examining the relationship between academic performance and measures of behavioral health in a subsample of 3,008 (22%) participants in a nationally representative, multicultural sample of 13,601 students in the 2001 Youth Risk Behavioral Survey, comparing Asian…

  9. Influence of model assumptions about HIV disease progression after initiating or stopping treatment on estimates of infections and deaths averted by scaling up antiretroviral therapy

    Science.gov (United States)

    Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir

    2018-01-01

    Background: Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods: A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results: Little absolute difference in the fraction of HIV infections averted was predicted across the progression assumptions for the same increase in ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15 pp). However, if ART dropouts could only reinitiate ART at low CD4 counts, the differences across assumptions were larger. Conclusions: The disease progression assumptions made for patients on ART and after ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only re-initiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136

  10. Simulating residential demand response: Improving socio-technical assumptions in activity-based models of energy demand

    OpenAIRE

    McKenna, E.; Higginson, S.; Grunewald, P.; Darby, S. J.

    2017-01-01

    Demand response is receiving increasing interest as a new form of flexibility within low-carbon power systems. Energy models are an important tool to assess the potential capability of demand side contributions. This paper critically reviews the assumptions in current models and introduces a new conceptual framework to better facilitate such an assessment. We propose three dimensions along which change could occur, namely technology, activities and service expectations. Using this framework, ...

  11. Political Assumptions Underlying Pedagogies of National Education: The Case of Student Teachers Teaching 'British Values' in England

    Science.gov (United States)

    Sant, Edda; Hanley, Chris

    2018-01-01

    Teacher education in England now requires that student teachers follow practices that do not undermine "fundamental British values" where these practices are assessed against a set of ethics and behaviour standards. This paper examines the political assumptions underlying pedagogical interpretations about the education of national…

  12. Technical note: Evaluation of the simultaneous measurements of mesospheric OH, HO2, and O3 under a photochemical equilibrium assumption - a statistical approach

    Science.gov (United States)

    Kulikov, Mikhail Y.; Nechaev, Anton A.; Belikovich, Mikhail V.; Ermakova, Tatiana S.; Feigin, Alexander M.

    2018-05-01

    This Technical Note presents a statistical approach to evaluating simultaneous measurements of several atmospheric components under the assumption of photochemical equilibrium. We consider simultaneous measurements of OH, HO2, and O3 at the altitudes of the mesosphere as a specific example and their daytime photochemical equilibrium as the evaluating relationship. A simplified algebraic equation relating local concentrations of these components in the 50-100 km altitude range has been derived. The parameters of the equation are temperature, neutral density, local zenith angle, and the rates of eight reactions. We have performed a one-year simulation of the mesosphere and lower thermosphere using a 3-D chemical-transport model. The simulation shows that the discrepancy between the calculated evolution of the components and the equilibrium value given by the equation does not exceed 3-4 % in the full range of altitudes, independent of season or latitude. We have developed a statistical Bayesian evaluation technique for simultaneous measurements of OH, HO2, and O3 based on the equilibrium equation, taking the measurement error into account. The first results of the application of the technique to MLS/Aura data (Microwave Limb Sounder) are presented in this Technical Note. It has been found that the satellite HO2 data regularly place the mesospheric maximum of this component at lower altitudes. This has also been confirmed by modelled HO2 distributions and by comparison with an offline retrieval of HO2 from daily zonal-mean MLS radiances.
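
    The following sketch is purely illustrative of the kind of consistency check described: the actual equilibrium equation in the Technical Note involves temperature, neutral density, zenith angle and eight reaction rates, so the functional form and all numbers below are hypothetical placeholders:

```python
# Illustrative sketch only: compare a measured OH value against an assumed
# photochemical-equilibrium prediction, given Gaussian measurement errors.
# The functional form of equilibrium_oh and all numbers are placeholders,
# not the equation derived in the Technical Note.

def equilibrium_oh(ho2, o3, k_eff=1.0):
    """Hypothetical equilibrium relation OH_eq = k_eff * HO2 * sqrt(O3)."""
    return k_eff * ho2 * o3 ** 0.5

def consistency_z(oh_meas, oh_sigma, ho2, o3):
    """Standardized residual between measured OH and its assumed equilibrium value."""
    return (oh_meas - equilibrium_oh(ho2, o3)) / oh_sigma

# Toy numbers in arbitrary units: |z| of order 1 means the measurements are
# consistent with the assumed equilibrium within the stated error.
print(consistency_z(oh_meas=2.1, oh_sigma=0.4, ho2=1.0, o3=4.0))
```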

  13. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

    The overall aim of BIOCLIM is to assess the possible long term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The coarse spatial scale of the Earth-system Models of Intermediate Complexity (EMICs) used in BIOCLIM compared with the BIOCLIM study regions and the needs of performance assessment creates a need for down-scaling. Most of the developmental work on down-scaling methodologies undertaken by the international research community has focused on down-scaling from the general circulation model (GCM) scale (with a typical spatial resolution of 400 km by 400 km over Europe in the current generation of models) using dynamical down-scaling (i.e., regional climate models (RCMs), which typically have a spatial resolution of 50 km by 50 km for models whose domain covers the European region) or statistical methods (which can provide information at the point or station scale) in order to construct scenarios of anthropogenic climate change up to 2100. Dynamical down-scaling (with the MAR RCM) is used in BIOCLIM WP2 to down-scale from the GCM (i.e., IPSL_CM4_D) scale. In the original BIOCLIM description of work, it was proposed that UEA would apply statistical down-scaling to IPSL_CM4_D output in WP2 as part of the hierarchical strategy. Statistical down-scaling requires the identification of statistical relationships between the observed large-scale and regional/local climate, which are then applied to large-scale GCM output, on the assumption that these relationships remain valid in the future (the assumption of stationarity). Thus it was proposed that UEA would investigate the extent to which it is possible to apply relationships between the present-day large-scale and regional/local climate to the relatively extreme conditions of the BIOCLIM WP2 snapshot simulations. Potential statistical down-scaling methodologies were identified from previous work performed at UEA. Appropriate station data from the case
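
    A minimal sketch of the statistical down-scaling idea described above (not the BIOCLIM methodology itself): fit a transfer function between an observed large-scale predictor and a local station variable, then apply it to GCM output under the stationarity assumption. All data below are synthetic placeholders:

```python
# Minimal statistical down-scaling sketch with synthetic data (assumed values).
import numpy as np

rng = np.random.default_rng(0)

# "Observed" period: large-scale mean temperature vs. local station temperature
large_scale_obs = rng.normal(10.0, 3.0, size=200)
station_obs = 0.8 * large_scale_obs + 2.0 + rng.normal(0.0, 1.0, size=200)

# Fit the transfer function on the observed period
slope, intercept = np.polyfit(large_scale_obs, station_obs, deg=1)

# Apply it to (synthetic) GCM output for a different climate state,
# assuming the relationship remains valid (the stationarity assumption)
gcm_future = rng.normal(14.0, 3.0, size=100)
station_downscaled = slope * gcm_future + intercept

print(f"transfer function: station = {slope:.2f} * large_scale + {intercept:.2f}")
print("down-scaled station mean:", round(float(station_downscaled.mean()), 2))
```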

  14. THE INSTANTANEOUS SPEED OF ADJUSTMENT ASSUMPTION AND STABILITY OF ECONOMIC-MODELS

    NARCIS (Netherlands)

    SCHOONBEEK, L

    In order to simplify stability analysis of an economic model one can assume that one of the model variables moves infinitely fast towards equilibrium, given the values of the other slower variables. We present conditions such that stability of the simplified model implies, or is implied by,

  15. A note on the translation of conceptual data models into description logics: disjointness and covering assumptions

    CSIR Research Space (South Africa)

    Casini, G

    2012-10-01

    Full Text Available possibilities for conceptual data modeling. It also raises the question of how existing conceptual models using ER, UML or ORM could be translated into Description Logics (DLs), a family of logics that have proved to be particularly appropriate for formalizing...

  16. Nucleon deep-inelastic structure functions in a quark model with factorizability assumptions

    International Nuclear Information System (INIS)

    Linkevich, A.D.; Skachkov, N.B.

    1979-01-01

    A formula for the structure functions of deep-inelastic electron scattering on the nucleon is derived using a dynamical model of factorizing quark amplitudes. It is found that, as the squared momentum transfer Q² increases, the structure functions decrease at large values of the kinematic variable x, while at small values of x they increase. Comparison with experimental data shows good agreement between the model and experiment

  17. Sensitivity of tsunami evacuation modeling to direction and land cover assumptions

    Science.gov (United States)

    Schmidtlein, Mathew C.; Wood, Nathan J.

    2015-01-01

    Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, there has been insufficient attention paid to understanding model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations and use the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of modeling to the direction of movement by comparing standard safety-to-hazard evacuation times to hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times but the strong relationship to the hazard-to-safety evacuation times, slightly conservative bias, and shorter processing times of the safety-to-hazard approach make it the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations with evacuation times lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.
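
    The Monte Carlo treatment of land-cover speed conservation values can be illustrated with a toy sketch (not the authors' GIS workflow); the path composition, base walking speed and SCV ranges below are assumed values for demonstration only:

```python
# Toy Monte Carlo sketch: effect of sampled speed conservation values (SCVs)
# on travel time along a fixed evacuation path. All values are assumed.
import numpy as np

rng = np.random.default_rng(42)
BASE_SPEED = 1.22  # m/s walking speed; an assumed value for illustration

# Evacuation path as (land-cover type, length in metres)
path = [("road", 400.0), ("grass", 150.0), ("wetland", 120.0), ("beach", 200.0)]

# Assumed uniform sampling ranges for the SCV of each land-cover type
scv_ranges = {"road": (0.9, 1.0), "grass": (0.6, 0.9),
              "wetland": (0.3, 0.6), "beach": (0.4, 0.7)}

def travel_time(scvs):
    """Travel time in minutes for one realization of SCVs."""
    return sum(length / (BASE_SPEED * scvs[cover]) for cover, length in path) / 60.0

times = []
for _ in range(1000):
    draw = {cover: rng.uniform(low, high) for cover, (low, high) in scv_ranges.items()}
    times.append(travel_time(draw))

times = np.array(times)
print(f"evacuation time: median {np.median(times):.1f} min, "
      f"5-95% range {np.percentile(times, 5):.1f}-{np.percentile(times, 95):.1f} min")
```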

  18. Recomputing Causality Assignments on Lumped Process Models When Adding New Simplification Assumptions

    Directory of Open Access Journals (Sweden)

    Antonio Belmonte

    2018-04-01

    Full Text Available This paper presents a new algorithm for the resolution of over-constrained lumped process systems, where partial differential equations of a continuous time and space model of the system are reduced into ordinary differential equations with a finite number of parameters and where the model equations outnumber the unknown model variables. Our proposal is aimed at the study and improvement of the algorithm proposed by Hangos-Szerkenyi-Tuza. This new algorithm improves the computational cost and solves some of the internal problems of the aforementioned algorithm in its original formulation. The proposed algorithm is based on parameter relaxation that can be modified easily. It retains the necessary information of the lumped process system to reduce the time cost after introducing changes during the system formulation. It also allows adjustment of the system formulations that change its differential index between simulations.

  19. Model of the electric energy market in Poland. Assumptions, structure and operation principles

    International Nuclear Information System (INIS)

    Kulagowski, W.

    1994-01-01

    The present state of work on a model of the electric energy market in Poland is presented, with special consideration of the bulk energy market. The designed model, based on gradual, evolutionary changes, is flexible enough that particular solutions can be verified or corrected while the general structure and fundamentals are kept. The changes in the electric energy market are considered an integral part of the ongoing restructuring of the Polish electric energy sector. The rate of those changes and the mode of their introduction determine how quickly the new solutions can be introduced. (author). 14 refs, 4 figs

  20. Accurate reduction of a model of circadian rhythms by delayed quasi steady state assumptions

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2014-01-01

    Roč. 139, č. 4 (2014), s. 577-585 ISSN 0862-7959 Grant - others:European Commission(XE) StochDetBioModel(328008) Program:FP7 Institutional support: RVO:67985840 Keywords : biochemical networks * gene regulatory networks * oscillating systems * periodic solution Subject RIV: BA - General Mathematics http://hdl.handle.net/10338.dmlcz/144135

  1. Towards realistic threat modeling : attack commodification, irrelevant vulnerabilities, and unrealistic assumptions

    NARCIS (Netherlands)

    Allodi, L.; Etalle, S.

    2017-01-01

    Current threat models typically consider all possible ways an attacker can penetrate a system and assign probabilities to each path according to some metric (e.g. time-to-compromise). In this paper we discuss how this view hinders the realness of both technical (e.g. attack graphs) and strategic

  2. White Noise Assumptions Revisited : Regression Models and Statistical Designs for Simulation Practice

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2006-01-01

    Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise.By definition, white noise is normally, independently, and identically distributed with zero mean.This survey tries to answer the following questions: (i) How realistic are these

  3. Multiverse Assumptions and Philosophy

    Directory of Open Access Journals (Sweden)

    James R. Johnson

    2018-02-01

    Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with the fundamental nature of reality, ideas that cannot be proven right or wrong): topics such as infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.

  4. Modeling soil CO2 production and transport with dynamic source and diffusion terms: testing the steady-state assumption using DETECT v1.0

    Science.gov (United States)

    Ryan, Edmund M.; Ogle, Kiona; Kropp, Heather; Samuels-Crow, Kimberly E.; Carrillo, Yolima; Pendall, Elise

    2018-05-01

    The flux of CO2 from the soil to the atmosphere (soil respiration, Rsoil) is a major component of the global carbon (C) cycle. Methods to measure and model Rsoil, or partition it into different components, often rely on the assumption that soil CO2 concentrations and fluxes are in steady state, implying that Rsoil is equal to the rate at which CO2 is produced by soil microbial and root respiration. Recent research, however, questions the validity of this assumption. Thus, the aim of this work was two-fold: (1) to describe a non-steady state (NSS) soil CO2 transport and production model, DETECT, and (2) to use this model to evaluate the environmental conditions under which Rsoil and CO2 production are likely in NSS. The backbone of DETECT is a non-homogeneous, partial differential equation (PDE) that describes production and transport of soil CO2, which we solve numerically at fine spatial and temporal resolution (e.g., 0.01 m increments down to 1 m, every 6 h). Production of soil CO2 is simulated for every depth and time increment as the sum of root respiration and microbial decomposition of soil organic matter. Both of these factors can be driven by current and antecedent soil water content and temperature, which can also vary by time and depth. We also analytically solved the ordinary differential equation (ODE) corresponding to the steady-state (SS) solution to the PDE model. We applied the DETECT NSS and SS models to the six-month growing season period representative of a native grassland in Wyoming. Simulation experiments were conducted with both model versions to evaluate factors that could affect departure from SS, such as (1) varying soil texture; (2) shifting the timing or frequency of precipitation; and (3) with and without the environmental antecedent drivers. For a coarse-textured soil, Rsoil from the SS model closely matched that of the NSS model. However, in a fine-textured (clay) soil, growing season Rsoil was ˜ 3 % higher under the assumption of
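
    A minimal non-steady-state sketch in the spirit of the PDE described above (not the DETECT code): explicit finite differences for dC/dt = D d²C/dz² + S(z), with a fixed atmospheric concentration at the surface and zero flux at depth; the diffusivity, source profile and boundary values are placeholders:

```python
# Minimal NSS soil-CO2 sketch: 1-D diffusion with a depth-dependent source term.
# D, S(z), grid and boundary values are assumed placeholders, not DETECT inputs.
import numpy as np

nz, dz = 101, 0.01          # nodes every 0.01 m down to 1 m
dt = 10.0                   # s; satisfies the explicit stability limit D*dt/dz**2 < 0.5
D = 2.0e-6                  # m2/s, effective CO2 diffusivity (assumed constant)
z = np.arange(nz) * dz
S = 1.0e-6 * np.exp(-z / 0.3)   # CO2 production rate, decaying with depth (assumed)
C_atm = 0.015                    # surface (atmospheric) concentration, arbitrary units

C = np.full(nz, C_atm)
for _ in range(int(6 * 3600 / dt)):            # integrate for 6 h
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz ** 2
    lap[-1] = 2 * (C[-2] - C[-1]) / dz ** 2    # zero-flux lower boundary (mirror node)
    C[1:] += dt * (D * lap[1:] + S[1:])
    C[0] = C_atm                               # surface held at the atmospheric value

R_soil = D * (C[1] - C[0]) / dz                # diffusive efflux at the surface
print(f"surface CO2 flux after 6 h: {R_soil:.3e} (concentration units * m / s)")
```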

  5. The reality behind the assumptions: Modelling and simulation support for the SAAF

    CSIR Research Space (South Africa)

    Naidoo, K

    2015-10-01

    Full Text Available: Modelling and simulation support for the SAAF (Kavendra Naidoo, Military Aerospace Trends & Strategy). Military aerospace trends: national security now includes other dimensions (social, economic development, environmental, energy security, etc.); military budgets are constrained; the nature of the threat is changing (asymmetric, non-conventional, innovative, etc.); technology, information, skills and experience are proliferating and widely available; the Defence Review sets out the official strategy to respond to global...

  6. Oceanographic and behavioural assumptions in models of the fate of coral and coral reef fish larvae.

    Science.gov (United States)

    Wolanski, Eric; Kingsford, Michael J

    2014-09-06

    A predictive model of the fate of coral reef fish larvae in a reef system is proposed that combines the oceanographic processes of advection and turbulent diffusion with the biological process of horizontal swimming controlled by olfactory and auditory cues within the timescales of larval development. In the model, auditory cues resulted in swimming towards the reefs when within hearing distance of the reef, whereas olfactory cues resulted in the larvae swimming towards the natal reef in open waters by swimming against the concentration gradients in the smell plume emanating from the natal reef. The model suggested that the self-seeding rate may be quite large, at least 20% for the larvae of rapidly developing reef fish species, which contrasted with a self-seeding rate less than 2% for non-swimming coral larvae. The predicted self-recruitment rate of reefs was sensitive to a number of parameters, such as the time at which the fish larvae reach post-flexion, the pelagic larval duration of the larvae, the horizontal turbulent diffusion coefficient in reefal waters and the horizontal swimming behaviour of the fish larvae in response to auditory and olfactory cues, for which better field data are needed. Thus, the model suggested that high self-seeding rates for reef fish are possible, even in areas where the 'sticky water' effect is minimal and in the absence of long-term trapping in oceanic fronts and/or large-scale oceanic eddies or filaments that are often argued to facilitate the return of the larvae after long periods of drifting at sea. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. The reality behind the assumptions: Modelling and simulation support for the SAAF

    CSIR Research Space (South Africa)

    Naidoo, K

    2015-10-01

    Full Text Available ...the required depth of know-how to support and upgrade technologies. Having a strong DSET will further allow the Defence Force to leverage the capabilities of the national SET spectrum to meet future defence demands. Modelling... Military budgets are constrained; the nature of the threat is changing (asymmetric, non-conventional, innovative, etc.); technology, information, skills and experience are proliferating and widely available; the Defence Review sets out the official strategy to respond to global...

  8. The role of the spectator assumption in models for projectile fragmentation

    International Nuclear Information System (INIS)

    Mc Voy, K.W.

    1984-01-01

    This review is restricted to direct-reaction models for the production of projectile fragments in nuclear collisions, at beam energies of 10 or more MeV/nucleon. Projectile fragments are normally identified as those which have near-beam velocities, and there seem to be two principal mechanisms for the production of these fast particles: 1. direct breakup, 2. sequential breakup. Of the two, the authors exclude from their discussion the "sequential breakup" process, in which the projectile is excited by the initial collision (either via inelastic scattering or transfer to unbound states) and then subsequently decays, outside the range of interaction

  9. Fluid dynamics of air in a packed bed: velocity profiles and the continuum model assumption

    Directory of Open Access Journals (Sweden)

    NEGRINI A. L.

    1999-01-01

    Full Text Available Air flow through packed beds was analyzed experimentally under conditions ranging from those that reinforce the effect of the wall on the void fraction to those that minimize it. The packing was spherical particles, with a tube-to-particle diameter ratio (D/dp between 3 and 60. Air flow rates were maintained between 1.3 and 4.44 m3/min, and gas velocity was measured with a Pitot tube positioned above the bed exit. Measurements were made at various radial and angular coordinate values, allowing the distribution of air flow across the bed to be described in detail. Comparison of the experimentally observed radial profiles with those derived from published equations revealed that at high D/dp ratios the measured and calculated velocity profiles behaved similarly. At low ratios, oscillations in the velocity profiles agreed with those in the voidage profiles, signifying that treating the porous medium as a continuum medium is questionable in these cases.

  10. In modelling effects of global warming, invalid assumptions lead to unrealistic projections.

    Science.gov (United States)

    Lefevre, Sjannie; McKenzie, David J; Nilsson, Göran E

    2018-02-01

    In their recent Opinion, Pauly and Cheung () provide new projections of future maximum fish weight (W∞). Based on criticism by Lefevre et al. (2017), they changed the scaling exponent for anabolism, dG. Here we find that changing both dG and the scaling exponent for catabolism, b, leads to the projection that fish may even become 98% smaller with a 1°C increase in temperature. This unrealistic outcome indicates that the current W∞ is unlikely to be explained by the Gill-Oxygen Limitation Theory (GOLT) and, therefore, GOLT cannot be used as a mechanistic basis for model projections about fish size in a warmer world. © 2017 John Wiley & Sons Ltd.

  11. Flawed Assumptions, Models and Decision Making: Misconceptions Concerning Human Elements in Complex System

    International Nuclear Information System (INIS)

    FORSYTHE, JAMES C.; WENNER, CAREN A.

    1999-01-01

    The history of high consequence accidents is rich with events wherein the actions, or inaction, of humans were critical to the sequence of events preceding the accident. Moreover, it has been reported that human error may contribute to 80% of accidents, if not more (Dougherty and Fragola, 1988). Within the safety community, this reality is widely recognized and there is a substantially greater awareness of the human contribution to system safety today than has ever existed in the past. Despite these facts, and some measurable reduction in accident rates, when accidents do occur, there is a common lament: no matter how hard we try, we continue to have accidents. Accompanying this lament, there is often bewilderment, expressed in statements such as, "There's no explanation for why he/she did what they did". It is believed that such statements are a symptom of inadequacies in how we think about humans and their role within technological systems. In particular, while there has never been a greater awareness of human factors, conceptual models of human involvement in engineered systems are often incomplete and, in some cases, inaccurate

  12. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    Science.gov (United States)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we consider a unified approach based on the inverse probability integral transformation. In particular, we apply a hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we check the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.

  13. Reconstructing genealogies of serial samples under the assumption of a molecular clock using serial-sample UPGMA.

    Science.gov (United States)

    Drummond, A; Rodrigo, A G

    2000-12-01

    Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
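
    Since the contemporaneous special case of sUPGMA is ordinary UPGMA, a minimal sketch of that special case is shown below using SciPy's average-linkage clustering on a pairwise distance matrix; the correction of tip heights for different sampling times that defines sUPGMA is not implemented here, and the distance matrix is a made-up example:

```python
# UPGMA (average-linkage) clustering of a pairwise distance matrix with SciPy.
# The sUPGMA adjustment for serially sampled sequences is NOT implemented here;
# labels and distances are invented for illustration.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

labels = ["t0_a", "t0_b", "t1_a", "t1_b"]   # sequences sampled at two time points
D = np.array([[0.00, 0.02, 0.05, 0.06],
              [0.02, 0.00, 0.05, 0.06],
              [0.05, 0.05, 0.00, 0.03],
              [0.06, 0.06, 0.03, 0.00]])

# 'average' linkage on a condensed distance matrix is exactly UPGMA
tree = linkage(squareform(D), method="average")
print(tree)                                    # merge order and merge heights
dendrogram(tree, labels=labels, no_plot=True)  # tree structure only; no plotting
```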

  14. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  15. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    Science.gov (United States)

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum, but also produces very diverse results. Systematic reviews so far have focused on modeling techniques and epidemiologic assumptions. However, cost and utility parameters have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether utility parameters were consistent and methodologically sound, and whether costs were compatible and consistent in the actual parameters used for screening, diagnostic work-up and treatment. Of 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look into the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.

  16. Relevance of collisionality in the transport model assumptions for divertor detachment multi-fluid modelling on JET

    DEFF Research Database (Denmark)

    Wiesen, S.; Fundamenski, W.; Wischmeier, M.

    2011-01-01

    of the new transport model: a smoothly decaying target recycling flux roll over, an asymmetric drop of temperature and pressure along the field lines as well as macroscopic power dependent plasma oscillations near the density limit which had been previously observed also experimentally. The latter effect...

  17. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    Science.gov (United States)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contribution from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using analysis of variance (ANOVA) approach. Generally, most of the impact assessment studies are carried out with unchanging hydrologic model parameters in future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression based methodology is presented to obtain the hydrologic model parameters with changing land use and climate scenarios in future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set-up over the basin, under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to reduce in future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that model stationarity assumption and GCMs along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine stationarity assumption of models before considering them
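
    The ANOVA-based partitioning can be illustrated with a toy sketch (not the authors' implementation): for a balanced factorial set of projections indexed by GCM and emission scenario, the total variance splits exactly into main effects and an interaction term; the projection numbers below are invented:

```python
# Toy variance partitioning across GCMs and emission scenarios (invented numbers).
import numpy as np

# rows: 3 GCMs, columns: 2 emission scenarios; entries: mean projected flow (m3/s)
flows = np.array([[120.0, 105.0],
                  [140.0, 118.0],
                  [110.0, 100.0]])

grand = flows.mean()
gcm_effect = flows.mean(axis=1) - grand          # GCM main effects
scen_effect = flows.mean(axis=0) - grand         # scenario main effects
interaction = flows - grand - gcm_effect[:, None] - scen_effect[None, :]

ss_total = ((flows - grand) ** 2).sum()
ss_gcm = (gcm_effect ** 2).sum() * flows.shape[1]
ss_scen = (scen_effect ** 2).sum() * flows.shape[0]
ss_int = (interaction ** 2).sum()

for name, ss in [("GCM", ss_gcm), ("scenario", ss_scen), ("interaction", ss_int)]:
    print(f"{name:12s}: {100 * ss / ss_total:5.1f}% of total variance")
```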

  18. Adult Learning Assumptions

    Science.gov (United States)

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…

  19. Modeling of porous concrete elements under load

    Directory of Open Access Journals (Sweden)

    Demchyna B.H.

    2017-12-01

    Full Text Available It is known that cellular concretes fail almost immediately under load once certain critical stresses are reached. Such destruction is called a “catastrophic failure”. Crack formation is one of the main factors influencing the destruction of concrete. The modern theory of crack formation is mainly based on the Griffith theory of fracture. However, that theory does not fully correspond to the cellular structure of cell concrete, because it was developed for a solid body. The article presents one possible variant of modelling the structure of cellular concrete and gives some assumptions concerning the process of crack formation in such a hollow, non-solid medium.

  20. Modeling of porous concrete elements under load

    Science.gov (United States)

    Demchyna, B. H.; Famuliak, Yu. Ye.; Demchyna, Kh. B.

    2017-12-01

    It is known that cellular concretes fail almost immediately under load once certain critical stresses are reached. Such destruction is called a "catastrophic failure". Crack formation is one of the main factors influencing the destruction of concrete. The modern theory of crack formation is mainly based on the Griffith theory of fracture. However, that theory does not fully correspond to the cellular structure of cell concrete, because it was developed for a solid body. The article presents one possible variant of modelling the structure of cellular concrete and gives some assumptions concerning the process of crack formation in such a hollow, non-solid medium.

  1. Hawaiian forest bird trends: using log-linear models to assess long-term trends is supported by model diagnostics and assumptions (reply to Freed and Cann 2013)

    Science.gov (United States)

    Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.

    2014-01-01

    Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.

  2. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib; Galassi, R. Malpica; Valorani, M.

    2016-01-01

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  3. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib

    2016-01-05

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  4. Modelling microstructural evolution under irradiation

    International Nuclear Information System (INIS)

    Tikare, V.

    2015-01-01

    Microstructural evolution of materials under irradiation is characterised by some unique features that are not typically present in other application environments. While much understanding has been achieved through experimental studies, the ability to model this microstructural evolution for complex materials states and environmental conditions not only enhances understanding, it also enables prediction of materials behaviour under conditions that are difficult to duplicate experimentally. Furthermore, reliable models enable designing materials for improved engineering performance in their respective applications. Thus, the development and application of mesoscale microstructural models are important for advancing nuclear materials technologies. In this chapter, the application of the Potts model to nuclear materials will be reviewed and demonstrated, as an example of microstructural evolution processes. (author)

  5. How Does Temperature Impact Leaf Size and Shape in Four Woody Dicot Species? Testing the Assumptions of Leaf Physiognomy-Climate Models

    Science.gov (United States)

    McKee, M.; Royer, D. L.

    2017-12-01

    The physiognomy (size and shape) of fossilized leaves has been used to reconstruct the mean annual temperature of ancient environments. Colder temperatures often select for larger and more abundant leaf teeth—serrated edges on leaf margins—as well as a greater degree of leaf dissection. However, to be able to accurately predict paleotemperature from the morphology of fossilized leaves, leaves must be able to react quickly and in a predictable manner to changes in temperature. We examined the extent to which temperature affects leaf morphology in four tree species: Carpinus caroliniana, Acer negundo, Ilex opaca, and Ostrya virginiana. Saplings of these species were grown in two growth cabinets under contrasting temperatures (17 and 25 °C). Compared to the cool treatment, in the warm treatment Carpinus caroliniana leaves had significantly fewer leaf teeth and a lower ratio of total number of leaf teeth to internal perimeter; and Acer negundo leaves had a significantly lower feret diameter ratio (a measure of leaf dissection). In addition, a two-way ANOVA tested the influence of temperature and species on leaf physiognomy. This analysis revealed that all plants, regardless of species, tended to develop more highly dissected leaves with more leaf teeth in the cool treatment. Because the cabinets maintained equivalent moisture, humidity, and CO2 concentration between the two treatments, these results demonstrate that these species could rapidly adapt to changes in temperature. However, not all of the species reacted identically to temperature changes. For example, Acer negundo, Carpinus caroliniana, and Ostrya virginiana all had a higher number of total teeth in the cool treatment compared to the warm treatment, but the opposite was true for Ilex opaca. Our work questions a fundamental assumption common to all models predicting paleotemperature from the physiognomy of fossilized leaves: a given climate will inevitably select for the same leaf physiognomy

  6. Early Validation of Automation Plant Control Software using Simulation Based on Assumption Modeling and Validation Use Cases

    Directory of Open Access Journals (Sweden)

    Veronika Brandstetter

    2015-10-01

    Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality and with minimal strain on resources. A key driver in conducting these processes is the automation plant’s control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes and from the physical plant design. Currently, the validation of the control software often starts late in the engineering process, in many cases once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects is, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on the example of an actual plant project from the automation industry and present its technical implementation.

  7. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
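
    As we read the bounding-factor result described above, it can be evaluated in a few lines of code; the sketch below reflects our reading of the method rather than the authors' own software, with RR_EU denoting the exposure-confounder association and RR_UD the confounder-outcome association, both on the risk-ratio scale:

```python
# Numerical sketch of the bounding-factor idea (our reading, not the authors' code).

def bounding_factor(rr_eu: float, rr_ud: float) -> float:
    """Joint bounding factor B = RR_EU * RR_UD / (RR_EU + RR_UD - 1)."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def adjusted_rr(rr_observed: float, rr_eu: float, rr_ud: float) -> float:
    """Smallest risk ratio still compatible with confounding of the given strength."""
    return rr_observed / bounding_factor(rr_eu, rr_ud)

# Example: an observed RR of 2.5 and both sensitivity parameters set to 2.0
print(bounding_factor(2.0, 2.0))        # 4/3 ~ 1.33
print(adjusted_rr(2.5, 2.0, 2.0))       # ~ 1.87: not explained away by such a confounder
```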

  8. Negative emotions in art reception: Refining theoretical assumptions and adding variables to the Distancing-Embracing model.

    Science.gov (United States)

    Menninghaus, Winfried; Wagner, Valentin; Hanich, Julian; Wassiliwizky, Eugen; Jacobsen, Thomas; Koelsch, Stefan

    2017-01-01

    While covering all commentaries, our response specifically focuses on the following issues: How can the hypothesis of emotional distancing (qua art framing) be compatible with stipulating high levels of felt negative emotions in art reception? Which concept of altogether pleasurable mixed emotions does our model involve? Can mechanisms of predictive coding, social sharing, and immersion enhance the power of our model?

  9. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect 5,000 random samples were drawn. Finally, the mean Type I error rates for Multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound-symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser-correction, and Huynh-Feldt-correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results plead for a use of rANOVA with Huynh-Feldt-correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes ( n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The

  10. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    Science.gov (United States)

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
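
    The change of scale discussed here, from stochastic individual behaviour to population-level mean equations, can be illustrated with a toy comparison (this uses a direct stochastic simulation rather than process algebra); all parameter values are assumed for demonstration:

```python
# Toy SIS example: stochastic individual-level simulation (Gillespie algorithm)
# vs. the mean-field ODE describing the population-level average. Assumed values.
import numpy as np

BETA, GAMMA, N = 0.3, 0.1, 200   # transmission rate, recovery rate, population size

def gillespie_sis(i0=10, t_end=100.0, seed=1):
    """Number of infected individuals at t_end from one stochastic realization."""
    rng = np.random.default_rng(seed)
    t, i = 0.0, i0
    while t < t_end and i > 0:
        rate_inf = BETA * i * (N - i) / N   # frequency-dependent transmission
        rate_rec = GAMMA * i
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        i += 1 if rng.random() < rate_inf / total else -1
    return i

def mean_field_sis(i0=10, t_end=100.0, dt=0.01):
    """Euler integration of dI/dt = beta*I*(N-I)/N - gamma*I."""
    i = float(i0)
    for _ in range(int(t_end / dt)):
        i += dt * (BETA * i * (N - i) / N - GAMMA * i)
    return i

runs = [gillespie_sis(seed=s) for s in range(50)]
print("stochastic mean infected:", np.mean(runs))
print("mean-field infected:     ", mean_field_sis())
```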

  11. Haldane model under nonuniform strain

    Science.gov (United States)

    Ho, Yen-Hung; Castro, Eduardo V.; Cazalilla, Miguel A.

    2017-10-01

    We study the Haldane model under strain using a tight-binding approach, and compare the obtained results with the continuum-limit approximation. As in graphene, nonuniform strain leads to a time-reversal preserving pseudomagnetic field that induces (pseudo-)Landau levels. Unlike a real magnetic field, strain lifts the degeneracy of the zeroth pseudo-Landau levels at different valleys. Moreover, for the zigzag edge under uniaxial strain, strain removes the degeneracy within the pseudo-Landau levels by inducing a tilt in their energy dispersion. The latter arises from next-to-leading order corrections to the continuum-limit Hamiltonian, which are absent for a real magnetic field. We show that, for the lowest pseudo-Landau levels in the Haldane model, the dominant contribution to the tilt is different from graphene. In addition, although strain does not strongly modify the dispersion of the edge states, their interplay with the pseudo-Landau levels is different for the armchair and zigzag ribbons. Finally, we study the effect of strain in the band structure of the Haldane model at the critical point of the topological transition, thus shedding light on the interplay between nontrivial topology and strain in quantum anomalous Hall systems.

  12. A Violation of the Conditional Independence Assumption in the Two-High-Threshold Model of Recognition Memory

    Science.gov (United States)

    Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.

    2015-01-01

    The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…

  13. ASSESSING GOING CONCERN ASSUMPTION BY USING RATING VALUATION MODELS BASED UPON ANALYTICAL PROCEDURES IN CASE OF FINANCIAL INVESTMENT COMPANIES

    OpenAIRE

    Tatiana Danescu; Ovidiu Spatacean; Paula Nistor; Andrea Cristina Danescu

    2010-01-01

    Designing and performing analytical procedures aimed to assess the rating of the Financial Investment Companies are essential activities both in the phase of planning a financial audit mission and in the phase of issuing conclusions regarding the suitability of the going concern assumption, used by the management and other persons responsible for governance as the basis for the preparation and disclosure of financial statements. The paper aims to examine the usefulness of recognized models used in the practice of ...

  14. Simulating star clusters with the AMUSE software framework. I. Dependence of cluster lifetimes on model assumptions and cluster dissolution modes

    International Nuclear Information System (INIS)

    Whitehead, Alfred J.; McMillan, Stephen L. W.; Vesperini, Enrico; Portegies Zwart, Simon

    2013-01-01

    We perform a series of simulations of evolving star clusters using the Astrophysical Multipurpose Software Environment (AMUSE), a new community-based multi-physics simulation package, and compare our results to existing work. These simulations model a star cluster beginning with a King model distribution and a selection of power-law initial mass functions and contain a tidal cutoff. They are evolved using collisional stellar dynamics and include mass loss due to stellar evolution. After studying the differences between AMUSE results and results from previous studies, and confirming that they are understood, we explored the variation in cluster lifetimes due to the random realization noise introduced by transforming a King model to specific initial conditions. This random realization noise can affect the lifetime of a simulated star cluster by up to 30%. Two modes of star cluster dissolution were identified: a mass evolution curve that contains a runaway cluster dissolution with a sudden loss of mass, and a dissolution mode that does not contain this feature. We refer to these dissolution modes as 'dynamical' and 'relaxation' dominated, respectively. For Salpeter-like initial mass functions, we determined the boundary between these two modes in terms of the dynamical and relaxation timescales.

  15. The relevance of ''theory rich'' bridge assumptions

    NARCIS (Netherlands)

    Lindenberg, S

    1996-01-01

    Actor models are increasingly being used as a form of theory building in sociology because they can better represent the causal mechanisms that connect macro variables. However, actor models need additional assumptions, especially so-called bridge assumptions, for filling in the relatively empty …

  16. Development of a tool dedicated to the evaluation of hydrogen term source for technological Wastes: assumptions, physical models, and validation

    Energy Technology Data Exchange (ETDEWEB)

    Lamouroux, C. [CEA Saclay, Nuclear Energy Division /DANS, Department of physico-chemistry, 91191 Gif sur yvette (France); Esnouf, S. [CEA Saclay, DSM/IRAMIS/SIS2M/Radiolysis Laboratory , 91191 Gif sur yvette (France); Cochin, F. [Areva NC,recycling BU, DIRP/RDP tour Areva, 92084 Paris La Defense (France)

    2013-07-01

    In radioactive waste packages hydrogen is generated, on the one hand, from the radiolysis of wastes (mainly organic materials) and, on the other hand, from the radiolysis of the water content in the cement matrix. In order to assess hydrogen generation, two tools based on operational models have been developed. One is dedicated to the determination of the hydrogen source term arising from the radiolysis of the wastes: the STORAGE tool (Simulation Tool Of Emission Radiolysis Gas); the other deals with the hydrogen source term produced by radiolysis of the cement matrices (the Damar tool). The approach used by the STORAGE tool for assessing the production rate of radiolysis gases is divided into five steps: 1) Specification of the data packages, in particular, inventories and radiological materials defined for a package medium; 2) Determination of radiochemical yields for the different constituents and the associated laws of behavior; this determination of radiochemical yields is made from the PRELOG database, in which radiochemical yields under different irradiation conditions have been compiled; 3) Definition of hypotheses concerning the composition and the distribution of contamination inside the package to allow assessment of the power absorbed by the constituents; 4) Sum-up of all the contributions; and finally, 5) validation calculations by comparison with a reduced sampling of packages. Comparisons with measured values confirm the conservative character of the methodology and give confidence in the safety margins for the safety analysis report.
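
    As a back-of-the-envelope illustration of how a radiochemical yield and an absorbed power combine into a gas production rate (steps 2 to 4 above), the sketch below uses an assumed G-value and an assumed absorbed power; neither number comes from the PRELOG database or from this record.

```python
# back-of-the-envelope hydrogen source term from a radiochemical yield
# (all numbers are illustrative assumptions, not values from PRELOG or STORAGE)
G_H2 = 0.3        # molecules of H2 per 100 eV absorbed (assumed yield)
P_abs = 2.0       # W absorbed by the organic waste (assumed)
EV = 1.602e-19    # J per eV
N_A = 6.022e23    # molecules per mol

molecules_per_joule = G_H2 / (100.0 * EV)
mol_per_year = molecules_per_joule * P_abs * 3600 * 24 * 365 / N_A
litres_per_year = mol_per_year * 22.4          # ideal gas molar volume at STP
print(f"H2 production ~ {mol_per_year:.1f} mol/yr (~{litres_per_year:.0f} L/yr at STP)")
```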

  17. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times. [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active galactic nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics: an intrinsic delay (the photons may simply have been emitted at two different times by the astrophysical source); a delay due to Lorentz invariance violation (perhaps the assumption that all massless particles, even two photons with different energies, move at the exact same velocity in a vacuum is incorrect); a special-relativistic delay (maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong; this, too, would cause photon velocities to be energy-dependent); or a delay due to the gravitational potential (perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies; this would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect). If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these …

  18. Generic distortion model for metrology under optical microscopes

    Science.gov (United States)

    Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng

    2018-04-01

    For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortions can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. Such a model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (image coordinates and distortion rays for asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, such as in Digital Image Correlation for deformation measurement when used with an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
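
    The symmetric part of such a model amounts to interpolating distortion as a function of image radius with RBFs. The sketch below shows a generic Gaussian-RBF interpolation on made-up calibration data; the kernel, its width, and the data are assumptions for illustration and are not the choices made in this record.

```python
import numpy as np

def fit_gaussian_rbf(r_samples, d_samples, sigma=0.1):
    """Fit a Gaussian radial basis function interpolant to sampled
    (radius, distortion) pairs; returns a callable for arbitrary radii."""
    phi = np.exp(-((r_samples[:, None] - r_samples[None, :]) / sigma) ** 2)
    weights = np.linalg.solve(phi + 1e-10 * np.eye(len(r_samples)), d_samples)

    def interpolant(r):
        basis = np.exp(-((np.atleast_1d(r)[:, None] - r_samples[None, :]) / sigma) ** 2)
        return basis @ weights

    return interpolant

# synthetic calibration data: normalized radius vs measured radial distortion
r_cal = np.linspace(0.0, 1.0, 15)
d_cal = 0.02 * r_cal ** 3 - 0.005 * r_cal          # assumed "true" distortion curve
model = fit_gaussian_rbf(r_cal, d_cal)
print(model(np.array([0.25, 0.5, 0.9])))           # interpolated distortion at new radii
```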

  19. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the proportion of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. In contrast, assumptions on the parametric model, the absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
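
    A minimal simulation in the spirit of this point (coverage of the slope confidence interval when errors are skewed rather than normal) is sketched below; the data-generating values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def slope_ci_covers(n, true_beta=0.5):
    """One OLS fit with heavily skewed (non-normal) errors; returns True if the
    95% confidence interval for the slope covers the true value."""
    x = rng.normal(size=n)
    errors = rng.exponential(1.0, size=n) - 1.0      # skewed, mean zero
    y = 1.0 + true_beta * x + errors
    X = np.column_stack([np.ones(n), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return abs(beta_hat[1] - true_beta) < 1.96 * se

for n in (20, 1000):
    coverage = np.mean([slope_ci_covers(n) for _ in range(2000)])
    print(f"n={n:4d}: 95% CI coverage = {coverage:.3f}")
```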

  20. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  1. Chemical model reduction under uncertainty

    KAUST Repository

    Malpica Galassi, Riccardo; Valorani, Mauro; Najm, Habib N.; Safta, Cosmin; Khalil, Mohammad; Ciottoli, Pietro P.

    2017-01-01

    A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis

  2. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
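
    A much-simplified illustration of the with- versus without-replacement question is sketched below. It ignores the chain-referral structure of real RDS, assumes recruitment probability proportional to degree, and uses the inverse-degree (RDS-II style) estimator on synthetic data; all quantities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 2000
degree = rng.integers(1, 20, size=N).astype(float)              # network degrees
trait = (rng.random(N) < 0.2 + 0.02 * degree).astype(float)     # trait correlated with degree
true_prev = trait.mean()
p_select = degree / degree.sum()                                 # recruitment roughly prop. to degree

def rds_ii_estimate(idx):
    w = 1.0 / degree[idx]                                        # inverse-degree weights
    return np.sum(w * trait[idx]) / np.sum(w)

for frac in (0.1, 0.4, 0.8):
    n = int(frac * N)
    bias_with, bias_without = [], []
    for _ in range(300):
        i_with = rng.choice(N, size=n, replace=True, p=p_select)
        i_without = rng.choice(N, size=n, replace=False, p=p_select)
        bias_with.append(rds_ii_estimate(i_with) - true_prev)
        bias_without.append(rds_ii_estimate(i_without) - true_prev)
    print(f"sampling fraction {frac:.0%}: bias with repl. {np.mean(bias_with):+.4f}, "
          f"without repl. {np.mean(bias_without):+.4f}")
```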

  3. Chemical model reduction under uncertainty

    KAUST Repository

    Malpica Galassi, Riccardo

    2017-03-06

    A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
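
    The "probability of inclusion of each reaction" idea can be sketched schematically: sample uncertain rate multipliers, rank reactions by an importance measure, and record how often each reaction clears a threshold. The toy below is only illustrative; the importances, uncertainty factors, and threshold are hypothetical, and it does not reproduce the CSP-based analysis of this record.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical nominal "importance" of six reactions and log-uniform
# uncertainty factors on their rate coefficients (assumed values)
nominal_importance = np.array([1.0, 0.30, 0.10, 0.03, 0.01, 0.003])
uncertainty_factor = np.array([2.0, 3.0, 2.0, 5.0, 3.0, 10.0])
threshold = 0.05                     # keep a reaction if importance > 5% of the maximum

n_mc = 10000
included = np.zeros(len(nominal_importance))
for _ in range(n_mc):
    multipliers = uncertainty_factor ** rng.uniform(-1.0, 1.0, size=len(nominal_importance))
    importance = nominal_importance * multipliers
    included += (importance / importance.max()) > threshold
print("probability of inclusion per reaction:", included / n_mc)
```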

  4. On Assumptions in Development of a Mathematical Model of Thermo-gravitational Convection in the Large Volume Process Tanks Taking into Account Fermentation

    Directory of Open Access Journals (Sweden)

    P. M. Shkapov

    2015-01-01

    The paper provides a mathematical model of thermo-gravity convection in a large-volume vertical cylinder. The heat is removed from the product via the cooling jacket at the top of the cylinder. We suppose that a laminar fluid motion takes place. The model is based on the Navier-Stokes equation, the equation of heat transfer through the wall, and the heat transfer equation. A peculiarity of the process in large-volume tanks is the spatial distribution of the physical parameters, which was taken into account when constructing the model. The model corresponds to the process of wort beer fermentation in cylindrical-conical tanks (CCT). The CCT volume is divided into three zones, and model equations were obtained for each zone. The first zone has an annular cross-section and is bounded in height by the cooling jacket. In this zone the heat flow from the cooling jacket to the product dominates. The model equation for the first zone describes the process of heat transfer through the wall and is a linear inhomogeneous partial differential equation that is solved analytically. A number of engineering assumptions were made to describe the second and third zones. The fluid was considered Newtonian, viscous and incompressible. Convective motion is considered in the Boussinesq approximation. The effect of viscous dissipation is not considered. The topology of the fluid motion is similar to cylindrical Poiseuille flow. The second-zone model consists of the Navier-Stokes equations in cylindrical coordinates, introduced in a simplified form, together with the heat equation in the liquid layer. The volume occupied by the upward convective flow constitutes the third zone. The convective flows do not mix and do not exchange heat. At the start of the process the medium has a uniform temperature and zero initial velocity throughout the volume, which allows us to specify the initial conditions for the process. …

  5. A test of the critical assumption of the sensory bias model for the evolution of female mating preference using neural networks.

    Science.gov (United States)

    Fuller, Rebecca C

    2009-07-01

    The sensory bias model for the evolution of mating preferences states that mating preferences evolve as correlated responses to selection on nonmating behaviors sharing a common sensory system. The critical assumption is that pleiotropy creates genetic correlations that affect the response to selection. I simulated selection on populations of neural networks to test this. First, I selected for various combinations of foraging and mating preferences. Sensory bias predicts that populations with preferences for like-colored objects (red food and red mates) should evolve more readily than preferences for differently colored objects (red food and blue mates). Here, I found no evidence for sensory bias. The responses to selection on foraging and mating preferences were independent of one another. Second, I selected on foraging preferences alone and asked whether there were correlated responses for increased mating preferences for like-colored mates. Here, I found modest evidence for sensory bias. Selection for a particular foraging preference resulted in increased mating preference for similarly colored mates. However, the correlated responses were small and inconsistent. Selection on foraging preferences alone may affect initial levels of mating preferences, but these correlations did not constrain the joint evolution of foraging and mating preferences in these simulations.

  6. The ruin probability of a discrete time risk model under constant interest rate with heavy tails

    NARCIS (Netherlands)

    Tang, Q.

    2004-01-01

    This paper investigates the ultimate ruin probability of a discrete time risk model with a positive constant interest rate. Under the assumption that the gross loss of the company within one year is subexponentially distributed, a simple asymptotic relation for the ruin probability is derived and

  7. Assumptions for the Annual Energy Outlook 1993

    International Nuclear Information System (INIS)

    1993-01-01

    This report is an auxiliary document to the Annual Energy Outlook 1993 (AEO) (DOE/EIA-0383(93)). It presents a detailed discussion of the assumptions underlying the forecasts in the AEO. The energy modeling system is an economic equilibrium system, with component demand modules representing end-use energy consumption by major end-use sector. Another set of modules represents petroleum, natural gas, coal, and electricity supply patterns and pricing. A separate module generates annual forecasts of important macroeconomic and industrial output variables. Interactions among these components of energy markets generate projections of prices and quantities for which energy supply equals energy demand. This equilibrium modeling system is referred to as the Intermediate Future Forecasting System (IFFS). The supply models in IFFS for oil, coal, natural gas, and electricity determine supply and price for each fuel depending upon consumption levels, while the demand models determine consumption depending upon end-use price. IFFS solves for market equilibrium for each fuel by balancing supply and demand to produce an energy balance in each forecast year
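
    As a toy illustration of the equilibrium idea described here (a price adjusts until supply meets demand), the sketch below clears a single hypothetical market by fixed-point iteration; the curves and numbers are invented and bear no relation to IFFS or AEO values.

```python
# toy single-fuel market equilibrium in the spirit of an energy-economy
# equilibrium system (all numbers are illustrative assumptions)
def supply(price):    # quantity producers offer at a given price
    return 10.0 + 2.0 * price

def demand(price):    # quantity consumers want at a given price
    return 50.0 - 3.0 * price

price = 1.0
for _ in range(100):
    excess = demand(price) - supply(price)
    price += 0.05 * excess          # raise the price when demand exceeds supply
print(f"equilibrium price ~ {price:.2f}, quantity ~ {supply(price):.2f}")
```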

  8. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  9. RADIONUCLIDE TRANSPORT MODELS UNDER AMBIENT CONDITIONS

    Energy Technology Data Exchange (ETDEWEB)

    S. Magnuson

    2004-11-01

    The purpose of this model report is to document the unsaturated zone (UZ) radionuclide transport model, which evaluates, by means of three-dimensional numerical models, the transport of radioactive solutes and colloids in the UZ, under ambient conditions, from the repository horizon to the water table at Yucca Mountain, Nevada.

  10. A model for phase stability under irradiation

    International Nuclear Information System (INIS)

    Abromeit, C.

    The combination of two theoretical models leads to modified criteria for the stability of precipitates under heavy-particle irradiation. The size of existing precipitates, or of precipitates newly formed under irradiation, is limited by a stable radius. Precipitate surface energy effects are included in a consistent manner.

  11. Modeling of the radiation regime and photosynthesis of a finite canopy using the DART model. Influence of canopy architecture assumptions and border effects

    International Nuclear Information System (INIS)

    Demarez, V.; Gastellu-Etchegorry, J.P.; Mordelet, P.; Tosca, C.; Marty, G.; Guillevic, P.

    2000-01-01

    The scope of this work was to investigate the impact of the border effects and the 3-D architecture of a fallow field on: 1) its bidirectional reflectance factor (BRF); 2) its PAR (photosynthetically active radiation) regime; and, to a lesser extent, 3) on its carbon assimilation. For this purpose, laboratory BRF measurements were conducted on a sample of a fallow field. Moreover, we modified a 3-D radiative transfer model in order to simulate the visible and near infrared BRF of finite and heterogeneous media. Several scene representations were used (finite and infinite scenes with/without 1-D or 3-D distribution of leaf area index [LAI]). Results showed that border effects and LAI distribution strongly affect the BRF, with variations as large as 40% depending on the scene representations and on the spectral domain. PAR profiles and instantaneous canopy carbon assimilation of an infinite scene (natural conditions) were also studied with the 3-D model. The results stressed that, in the case of a fallow field, the use of a simple LAI profile provides enough information to accurately simulate the effects of the architecture on the PAR regime and the carbon assimilation of a fallow field. (author) [fr

  12. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another factor relevant to the model weights in model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
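
    A bootstrap error estimate of a Monte Carlo model evidence can be sketched in a few lines. The toy below (assumed Gaussian prior and likelihood, invented sample budgets) only illustrates how a smaller affordable sample size inflates the sampling error of the evidence; it does not implement the weighting scheme of this record.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setting: data generated from N(0.5, 1); model prior theta ~ N(0, 1)
data = rng.normal(0.5, 1.0, size=20)

def log_like(theta):
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

def bme_with_bootstrap(n_samples, n_boot=200):
    """Brute-force Monte Carlo estimate of the Bayesian model evidence from
    prior samples, plus a bootstrap estimate of its sampling error. Fewer
    affordable samples (an "expensive" model) give a larger error."""
    thetas = rng.normal(0.0, 1.0, size=n_samples)          # prior samples
    likes = np.exp([log_like(t) for t in thetas])
    boot = [np.mean(rng.choice(likes, size=n_samples, replace=True))
            for _ in range(n_boot)]
    return likes.mean(), np.std(boot)

for n in (50, 5000):   # "expensive" vs "cheap" model budgets (assumed)
    bme, err = bme_with_bootstrap(n)
    print(f"n={n:5d}: BME ~ {bme:.3e} +/- {err:.1e}")
```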

  13. Behavioural assumptions in labour economics: Analysing social security reforms and labour market transitions

    OpenAIRE

    van Huizen, T.M.

    2012-01-01

    The aim of this dissertation is to test behavioural assumptions in labour economics models and thereby improve our understanding of labour market behaviour. The assumptions under scrutiny in this study are derived from an analysis of recent influential policy proposals: the introduction of savings schemes in the system of social security. A central question is how this reform will affect labour market incentives and behaviour. Part I (Chapter 2 and 3) evaluates savings schemes. Chapter 2 exam...

  14. Radionuclide Transport Models Under Ambient Conditions

    International Nuclear Information System (INIS)

    Moridis, G.; Hu, Q.

    2001-01-01

    The purpose of Revision 00 of this Analysis/Model Report (AMR) is to evaluate (by means of 2-D semianalytical and 3-D numerical models) the transport of radioactive solutes and colloids in the unsaturated zone (UZ) under ambient conditions from the potential repository horizon to the water table at Yucca Mountain (YM), Nevada

  15. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  16. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  17. Major Assumptions of Mastery Learning.

    Science.gov (United States)

    Anderson, Lorin W.

    Mastery learning can be described as a set of group-based, individualized, teaching and learning strategies based on the premise that virtually all students can and will, in time, learn what the school has to teach. Inherent in this description are assumptions concerning the nature of schools, classroom instruction, and learners. According to the…

  18. Models of bounded rationality under certainty

    NARCIS (Netherlands)

    Rasouli, S.; Timmermans, H.J.P.; Rasouli, S.; Timmermans, H.J.P.

    2015-01-01

    Purpose: This chapter reviews models of decision-making and choice under conditions of certainty. It allows readers to position the contribution of the other chapters in this book in the historical development of the topic area. Theory: Bounded rationality is defined in terms of a strategy to simplify …

  19. Two-fluid model for locomotion under self-confinement

    Science.gov (United States)

    Reigh, Shang Yik; Lauga, Eric

    2017-09-01

    The bacterium Helicobacter pylori causes ulcers in the stomach of humans by invading mucus layers protecting epithelial cells. It does so by chemically changing the rheological properties of the mucus from a high-viscosity gel to a low-viscosity solution in which it may self-propel. We develop a two-fluid model for this process of swimming under self-generated confinement. We solve exactly for the flow and the locomotion speed of a spherical swimmer located in a spherically symmetric system of two Newtonian fluids whose boundary moves with the swimmer. We also treat separately the special case of an immobile outer fluid. In all cases, we characterize the flow fields, their spatial decay, and the impact of both the viscosity ratio and the degree of confinement on the locomotion speed of the model swimmer. The spatial decay of the flow retains the same power-law decay as for locomotion in a single fluid but with a decreased magnitude. Independent of the assumption chosen to characterize the impact of confinement on the actuation applied by the swimmer, its locomotion speed always decreases with an increase in the degree of confinement. Our modeling results suggest that a low-viscosity region of at least six times the effective swimmer size is required to lead to swimming with speeds similar to locomotion in an infinite fluid, corresponding to a region of size above ≈25 μ m for Helicobacter pylori.

  20. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel

    2008-01-01

    Thinking about improving the management of software development in software firms is dominated by one approach: the capability maturity model devised and administered at the Software Engineering Institute at Carnegie Mellon University. Though CMM, and its replacement CMMI, are widely known and used ... thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...

  1. Occupancy estimation and the closure assumption

    Science.gov (United States)

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing

  2. Fiber Bundle Model Under Heterogeneous Loading

    Science.gov (United States)

    Roy, Subhadeep; Goswami, Sanchari

    2018-03-01

    The present work deals with the behavior of the fiber bundle model under heterogeneous loading conditions. The model is explored both in the mean-field limit as well as with local stress concentration. In the mean-field limit, the failure abruptness decreases with increasing order k of heterogeneous loading. In this limit, a brittle to quasi-brittle transition is observed at a particular strength of disorder, which changes with k. On the other hand, the model is hardly affected by such heterogeneity in the limit where local stress concentration plays a crucial role. The continuous limit of the heterogeneous loading is also studied and discussed in this paper. Some of the important results related to the fiber bundle model are reviewed and their responses to our new scheme of heterogeneous loading are studied in detail. Our findings are universal with respect to the nature of the threshold distribution adopted to assign strength to an individual fiber.
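
    For reference, the baseline (homogeneously loaded, equal-load-sharing) fiber bundle model can be simulated in a few lines; the sketch below recovers the classical bundle strength of 1/4 for uniform thresholds and does not implement the heterogeneous loading scheme introduced in this record.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10000
thresholds = np.sort(rng.uniform(0.0, 1.0, N))   # fiber strengths, sorted ascending
# quasi-static loading under equal load sharing: when the k weakest fibers have
# failed, the remaining N - k fibers share the load equally, so the bundle
# survives a total load of (N - k) * thresholds[k]
loads = (N - np.arange(N)) * thresholds
critical_load = loads.max() / N                   # bundle strength per fiber
print(f"bundle strength per fiber: {critical_load:.3f} (theory: 0.25 for uniform thresholds)")
```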

  3. Modeling of microstructural evolution under irradiation

    International Nuclear Information System (INIS)

    Odette, G.R.

    1979-01-01

    Microstructural evolution under irradiation is an extremely complex phenomenon involving numerous interacting mechanisms which alter both the microstructure and microchemistry of structural alloys. Predictive procedures which correlate primary irradiation and material variables to microstructural response are needed to extrapolate from the imperfect data base, which will be available, to fusion reactor conditions. Clearly, a marriage between models and experiments is needed. Specific steps to achieving such a marriage in the form of composite correlation model analysis are outlined and some preliminary results presented. The strongly correlated nature of microstructural evolution is emphasized and it is suggested that rate theory models, resting on the principle of material balances and focusing on coupled point defect-microchemical segregation processes, may be a practical approach to correlation model development. (orig.)

  4. Numerical modeling of materials under extreme conditions

    CERN Document Server

    Brown, Eric

    2014-01-01

    The book presents twelve state of the art contributions in the field of numerical modeling of materials subjected to large strain, high strain rates, large pressure and high stress triaxialities, organized into two sections. The first part is focused on high strain rate-high pressures such as those occurring in impact dynamics and shock compression related phenomena, dealing with material response identification, advanced modeling incorporating microstructure and damage, stress waves propagation in solids and structures response under impact. The latter part is focused on large strain-low strain rates applications such as those occurring in technological material processing, dealing with microstructure and texture evolution, material response at elevated temperatures, structural behavior under large strain and multi axial state of stress.

  5. Modeling steam pressure under martian lava flows

    Science.gov (United States)

    Dundas, Colin M.; Keszthelyi, Laszlo P.

    2013-01-01

    Rootless cones on Mars are a valuable indicator of past interactions between lava and water. However, the details of the lava–water interactions are not fully understood, limiting the ability to use these features to infer new information about past water on Mars. We have developed a model for the pressurization of a dry layer of porous regolith by melting and boiling ground ice in the shallow subsurface. This model builds on previous models of lava cooling and melting of subsurface ice. We find that for reasonable regolith properties and ice depths of decimeters, explosive pressures can be reached. However, the energy stored within such lags is insufficient to excavate thick flows unless they draw steam from a broader region than the local eruption site. These results indicate that lag pressurization can drive rootless cone formation under favorable circumstances, but in other instances molten fuel–coolant interactions are probably required. We use the model results to consider a range of scenarios for rootless cone formation in Athabasca Valles. Pressure buildup by melting and boiling ice under a desiccated lag is possible in some locations, consistent with the expected distribution of ice implanted from atmospheric water vapor. However, it is uncertain whether such ice has existed in the vicinity of Athabasca Valles in recent history. Plausible alternative sources include surface snow or an aqueous flood shortly before the emplacement of the lava flow.

  6. Modelling sulfamethoxazole degradation under different redox conditions

    Science.gov (United States)

    Sanchez-Vila, X.; Rodriguez-Escales, P.

    2015-12-01

    Sulfamethoxazole (SMX) is a low-adsorptive, polar, sulfonamide antibiotic, widely present in aquatic environments. Degradation of SMX in subsurface porous media is spatially and temporally variable, depending on various environmental factors such as in situ redox potential, availability of nutrients, local soil characteristics, and temperature. It has been reported that SMX is better degraded under anoxic conditions and by co-metabolism processes. In this work, we first develop a conceptual model of degradation of SMX under different redox conditions (denitrification and iron-reducing conditions), and second, we construct a mathematical model that allows reproducing different experiments of SMX degradation reported in the literature. The conceptual model focuses on the molecular behavior and contemplates the formation of different metabolites. The model was validated using the experimental data from Barbieri et al. (2012) and Mohatt et al. (2011). It adequately reproduces the reversible degradation of SMX in the presence of nitrite as an intermediate product of denitrification. In those experiments degradation was mediated by the transient formation of a diazonium cation, which was considered responsible for the substitution of the amine radical by a nitro radical, forming 4-nitro-SMX. The formation of this metabolite is a reversible process, so that once the concentration of nitrite was back to zero due to further advancement of denitrification, the concentration of SMX was fully recovered. The forward reaction, formation of 4-nitro-SMX, was modeled with second-order kinetics, whereas the backward reaction, dissociation of 4-nitro-SMX back to the original compound, could be modeled with a first-order reaction. Regarding the iron-reducing conditions, SMX was degraded through the oxidation of iron (Fe2+) that had previously been reduced from goethite due to the degradation of a pool of labile organic carbon. As the oxidation of iron occurred on the …
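
    The reversible nitrite-mediated pathway described here (second-order formation of 4-nitro-SMX, first-order dissociation back to SMX) can be sketched as a small kinetic system; the rate constants, concentrations, and nitrite decay below are hypothetical and chosen only to show the dip-and-recovery behaviour.

```python
# reversible SMX / 4-nitro-SMX kinetics with decaying nitrite
# (rate constants and concentrations are illustrative assumptions)
k_f = 0.05   # L/(umol*day), second-order formation of 4-nitro-SMX from SMX + nitrite
k_b = 0.10   # 1/day, first-order dissociation back to SMX
k_n = 0.20   # 1/day, first-order consumption of nitrite by ongoing denitrification

dt, t_max = 0.01, 60.0
smx, nitro, nitrite = 1.0, 0.0, 5.0   # umol/L initial concentrations (assumed)
for _ in range(int(t_max / dt)):
    r_f = k_f * smx * nitrite          # forward: SMX + nitrite -> 4-nitro-SMX
    r_b = k_b * nitro                  # backward: 4-nitro-SMX -> SMX
    smx += (r_b - r_f) * dt
    nitro += (r_f - r_b) * dt
    nitrite += (-k_n * nitrite) * dt   # nitrite disappears as denitrification proceeds
print(f"after {t_max:.0f} days: SMX = {smx:.3f}, 4-nitro-SMX = {nitro:.3f}, nitrite = {nitrite:.4f}")
```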

  7. Graphical models for inference under outcome-dependent sampling

    DEFF Research Database (Denmark)

    Didelez, V; Kreiner, S; Keiding, N

    2010-01-01

    We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including a node for the sampling indicator, assumptions about sampling processes can be made explicit. We demonstrate how to read off such graphs whether consistent estimation of the association between exposure and outcome is possible. Moreover, we give sufficient graphical conditions for testing and estimating ...

  8. Complex networks under dynamic repair model

    Science.gov (United States)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

  9. Assumptions for the Annual Energy Outlook 1992

    International Nuclear Information System (INIS)

    1992-01-01

    This report serves as an auxiliary document to the Energy Information Administration (EIA) publication Annual Energy Outlook 1992 (AEO) (DOE/EIA-0383(92)), released in January 1992. The AEO forecasts were developed for five alternative cases and consist of energy supply, consumption, and price projections by major fuel and end-use sector, which are published at a national level of aggregation. The purpose of this report is to present important quantitative assumptions, including world oil prices and macroeconomic growth, underlying the AEO forecasts. The report has been prepared in response to external requests, as well as analyst requirements for background information on the AEO and studies based on the AEO forecasts

  10. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption...

  11. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    Science.gov (United States)

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method to detect cognitive aging effects even within a narrow age range, and a useful approach to structure the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  12. Cognitive ageing on latent constructs for visual processing capacity: A novel Structural Equation Modelling framework with causal assumptions based on A Theory of Visual Attention

    Directory of Open Access Journals (Sweden)

    Simon eNielsen

    2015-01-01

    We examined the effects of normal ageing on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive ageing affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modelling (SEM; Model 2), informed by functional structures that were modelled with path analyses in SEM (Model 1). The results show that ageing effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective ageing effects on processing speed, and inconsistent with other studies reporting ageing effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method to detect cognitive ageing effects even within a narrow age range, and a useful approach to structure the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  13. The theory of reasoned action as a model of marijuana use: tests of implicit assumptions and applicability to high-risk young women.

    Science.gov (United States)

    Morrison, Diane M; Golder, Seana; Keller, Thomas E; Gillmore, Mary Rogers

    2002-09-01

    The theory of reasoned action (TRA) is used to model decisions about substance use among young mothers who became premaritally pregnant at age 17 or younger. The results of structural equation modeling to test the TRA indicated that most relationships specified by the model were significant and in the predicted direction. Attitude was a stronger predictor of intention than norm, but both were significantly related to intention, and intention was related to actual marijuana use 6 months later. Outcome beliefs were bidimensional, and positive outcome beliefs, but not negative beliefs, were significantly related to attitude. Prior marijuana use was only partially mediated by the TRA variables; it also was directly related to intentions to use marijuana and to subsequent use.

  14. Radionuclide Transport Models Under Ambient Conditions

    International Nuclear Information System (INIS)

    Moridis, G.; Hu, Q.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to evaluate (by means of 2-D semianalytical and 3-D numerical models) the transport of radioactive solutes and colloids in the unsaturated zone (UZ) under ambient conditions from the potential repository horizon to the water table at Yucca Mountain (YM), Nevada. This is in accordance with the ''AMR Development Plan U0060, Radionuclide Transport Models Under Ambient Conditions'' (CRWMS M and O 1999a). This AMR supports the UZ Flow and Transport Process Model Report (PMR). This AMR documents the UZ Radionuclide Transport Model (RTM). This model considers: the transport of radionuclides through fractured tuffs; the effects of changes in the intensity and configuration of fracturing from hydrogeologic unit to unit; colloid transport; physical and retardation processes and the effects of perched water. In this AMR they document the capabilities of the UZ RTM, which can describe flow (saturated and/or unsaturated) and transport, and accounts for (a) advection, (b) molecular diffusion, (c) hydrodynamic dispersion (with full 3-D tensorial representation), (d) kinetic or equilibrium physical and/or chemical sorption (linear, Langmuir, Freundlich or combined), (e) first-order linear chemical reaction, (f) radioactive decay and tracking of daughters, (g) colloid filtration (equilibrium, kinetic or combined), and (h) colloid-assisted solute transport. Simulations of transport of radioactive solutes and colloids (incorporating the processes described above) from the repository horizon to the water table are performed to support model development and support studies for Performance Assessment (PA). The input files for these simulations include transport parameters obtained from other AMRs (i.e., CRWMS M and O 1999d, e, f, g, h; 2000a, b, c, d). When not available, the parameter values used are obtained from the literature. The results of the simulations are used to evaluate the transport of radioactive solutes and colloids, and

  15. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    Science.gov (United States)

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics vol. 54 pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator for the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator for the survivor function is proposed for the semi-Markov model under the dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example of lung cancer clinical trial and simulation results are reported of the mean squared errors of estimators under a proportional hazards and two different nonproportional hazards models.

  16. Stress-reducing preventive maintenance model for a unit under stressful environment

    International Nuclear Information System (INIS)

    Park, J.H.; Chang, Woojin; Lie, C.H.

    2012-01-01

    We develop a preventive maintenance (PM) model for a unit operated under a stressful environment. The PM model in this paper consists of a failure rate model and two cost models to determine the optimal PM schedule which minimizes the cost rate. The assumption behind the proposed model is that the stressful environment accelerates the failure of the unit and periodic maintenance reduces stress from outside. The failure rate model handles the maintenance effect of PM using improvement and stress factors. The cost models are categorized into two failure-recognition cases: immediate failure recognition and periodic failure detection. The optimal PM schedule is obtained by considering the trade-off between the related cost and the lifetime of the unit in our model setting. The practical usage of our proposed model is tested through a numerical example.
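
    A schematic of this kind of trade-off, locating the PM interval that minimizes a cost rate, is sketched below with a Weibull failure intensity, a simple stress multiplier, and invented cost figures; it is a generic periodic-PM-with-minimal-repair model, not the failure rate model or the two cost models of this record.

```python
import numpy as np

# schematic periodic-PM cost model (assumed values, not the paper's formulation):
# Weibull failure intensity with minimal repair between PMs, so the expected
# number of failures in a PM interval T is H(T) = (T/eta)**beta; a stress
# factor s > 1 (harsh environment) scales it; each PM costs c_pm, each failure c_f.
beta_shape, eta, stress = 2.5, 100.0, 1.5
c_pm, c_f = 50.0, 400.0

def cost_rate(T):
    expected_failures = stress * (T / eta) ** beta_shape
    return (c_pm + c_f * expected_failures) / T

T_grid = np.linspace(5.0, 300.0, 2000)
rates = np.array([cost_rate(T) for T in T_grid])
T_opt = T_grid[rates.argmin()]
print(f"optimal PM interval ~ {T_opt:.1f}, minimum cost rate {rates.min():.3f}")
```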

  17. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Modelling human eye under blast loading.

    Science.gov (United States)

    Esposito, L; Clemente, C; Bonora, N; Rossi, T

    2015-01-01

    Primary blast injury (PBI) is the general term that refers to injuries resulting from the mere interaction of a blast wave with the body. Although few instances of primary ocular blast injury, without a concomitant secondary blast injury from debris, are documented, some experimental studies demonstrate its occurrence. In order to investigate PBI to the eye, a finite element model of the human eye using simple constitutive models was developed. The material parameters were calibrated by a multi-objective optimisation performed on available eye impact test data. The behaviour of the human eye and the dynamics of mechanisms occurring under PBI loading conditions were modelled. For the generation of the blast waves, different combinations of explosive (trinitrotoluene) mass charge and distance from the eye were analysed. An interpretation of the resulting pressure, based on the propagation and reflection of the waves inside the eye bulb and orbit, is proposed. The peculiar geometry of the bony orbit (similar to a frustum cone) can induce a resonance cavity effect and generate a pressure standing wave potentially hurtful for eye tissues.

  19. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provided cloud geometry as well as cloud micro and macro properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed by a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas they are remarkably overestimated under the random overlap (RO) assumption in comparison with that using the CRM's inherent cloud geometry. These biases can reach as high as 100 W m-2 for SWCF and 60 W m-2 for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance on both SWCF and LWCF simulations, with the biases reduced almost 3-fold compared with traditional overlap assumptions. The superiority of the GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under traditional overlap assumptions. The study also points out the deficiency of assuming a constant Lcf in the GenO assumption. Further examinations indicate that the CRM-diagnosed Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account the variation of Lcf in the vertical well reproduces such a relationship and …
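
    The general overlap blend described here can be written down in a few lines: the projected cover of two layers is a weighted mix of the maximum-overlap and random-overlap results, with the weight decaying with layer separation. The sketch below uses assumed cloud fractions, separation, and decorrelation length purely for illustration.

```python
import numpy as np

def combined_cloud_fraction(c1, c2, dz, L_cf):
    """Vertically projected cloud fraction of two layers under the general
    overlap assumption of Hogan and Illingworth (2000): a blend of maximum and
    random overlap weighted by alpha = exp(-dz / L_cf)."""
    alpha = np.exp(-dz / L_cf)
    c_max = max(c1, c2)                     # maximum overlap
    c_rand = c1 + c2 - c1 * c2              # random overlap
    return alpha * c_max + (1.0 - alpha) * c_rand

# two layers 0.5 km apart, 40% and 30% cloud cover, decorrelation length 2 km (assumed)
print(combined_cloud_fraction(0.4, 0.3, dz=0.5, L_cf=2.0))
```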

  20. Big Bang Titanic: New Dark Energy (Vacuum Gravity) Cosmic Model Emerges Upon Falsification of The Big Bang By Disproof of Its Central Assumptions

    Science.gov (United States)

    Gentry, Robert

    2011-04-01

    Physicists who identify the big bang with the early universe should have first noted from Hawking's A Brief History of Time, p. 42, that he ties Hubble's law to Doppler shifts from galaxy recession from a nearby center, not to bb's unvalidated and thus problematical expansion redshifts. Our PRL submission LJ12135 describes such a model, but in it Hubble's law is due to Doppler and vacuum gravity effects, the 2.73K CBR is vacuum gravity shifted blackbody cavity radiation from an outer galactic shell, and its (1 + z)-1 dilation and (M,z) relations closely fit high-z SNe Ia data; all this strongly implies our model's vacuum energy is the elusive dark energy. We also find GPS operation's GR effects falsify big bang's in-flight expansion redshift paradigm, and hence the big bang, by showing λ changes occur only at emission. Surprisingly we also discover big bang's CBR prediction is T 0, while galactic photons shrink dλ/dt < 0. Contrary to a PRL editor's claim, the above results show LJ12135 fits PRL guidelines for papers that replace established theories. For details see alphacosmos.net.

  1. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per-image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate-analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to these data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well.
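
    The scaling relation stated above can be illustrated with a short sketch. This is not the authors' fitting code; it simply assumes a binormal candidate-analysis ROC and shows how an FROC curve follows once the initial-candidate operating point (NLF0, LLF0) is fixed. All names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def froc_from_candidate_roc(mu, sigma, nlf0, llf0, thresholds):
    """FROC curve as a linearly scaled candidate-analysis ROC curve.
    Candidate scores: noise ~ N(0, 1), signal ~ N(mu, sigma).
    (nlf0, llf0) is the operating point for detecting initial candidates."""
    fpf = norm.sf(thresholds)                 # candidate false-positive fraction
    tpf = norm.sf((thresholds - mu) / sigma)  # candidate true-positive fraction
    nlf = nlf0 * fpf                          # non-lesion localizations per image
    llf = llf0 * tpf                          # lesion localization fraction
    return nlf, llf

nlf, llf = froc_from_candidate_roc(mu=1.5, sigma=1.0, nlf0=3.0, llf0=0.9,
                                   thresholds=np.linspace(-3.0, 3.0, 7))
```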

  2. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with a focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose the rank-based ℓ 1 penalization with positive definite constraints for estimating sparse covariance matrices, and the rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected estimator. Our work combines the strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality in high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides a provable prediction interval in the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to analyze protein mass spectroscopy data.
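
    The rank-based covariance step can be sketched as follows. This is only a simplified illustration under the transnormal (Gaussian copula) assumption, using Kendall's tau and a crude eigenvalue-clipping repair in place of the paper's penalized, ADMM-based nearest-correlation projection; function names and data are made up for the example.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_based_correlation(X):
    """Map pairwise Kendall's tau to a Pearson-type correlation via
    sin(pi/2 * tau), which is consistent under a transnormal model."""
    n, p = X.shape
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
    return R

def clip_to_correlation(R, eps=1e-6):
    """Crude positive-definite repair by eigenvalue clipping (the paper
    instead uses an ADMM-based nearest correlation matrix projection)."""
    w, V = np.linalg.eigh(R)
    R_psd = V @ np.diag(np.clip(w, eps, None)) @ V.T
    d = np.sqrt(np.diag(R_psd))
    return R_psd / np.outer(d, d)     # rescale to unit diagonal

X = np.random.default_rng(0).normal(size=(200, 5)) ** 3   # non-normal margins
Sigma_hat = clip_to_correlation(rank_based_correlation(X))
```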

  3. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    Science.gov (United States)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.

  4. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Directory of Open Access Journals (Sweden)

    Luca Caricchi

    2016-04-01

    Full Text Available Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperatures within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages, to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
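
    A minimal sketch of the age-sampling step described above is given below. It assumes an illustrative linear cooling history and zircon crystallisation window; none of the numbers are the study's calibrated values.

```python
import numpy as np

def synthetic_zircon_ages(times, temperatures, magma_volume,
                          t_sat, t_sol, n_zircons=500, seed=0):
    """Draw a synthetic zircon age population, assuming the number of zircons
    crystallising in a time step is proportional to the volume of magma whose
    temperature lies inside the crystallisation window [t_sol, t_sat]."""
    in_window = (temperatures <= t_sat) & (temperatures >= t_sol)
    weights = np.where(in_window, magma_volume, 0.0)
    if weights.sum() == 0.0:
        return np.array([])
    rng = np.random.default_rng(seed)
    return rng.choice(times, size=n_zircons, p=weights / weights.sum())

# Illustrative 1 Myr linear cooling history of a 10 km^3 intrusion
t = np.linspace(0.0, 1.0, 1000)        # Myr after emplacement
T = 900.0 - 250.0 * t                  # deg C
V = np.full_like(t, 10.0)              # km^3
ages = synthetic_zircon_ages(t, T, V, t_sat=850.0, t_sol=700.0)
print(np.median(ages), ages.std())
```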

  5. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling of shock wave loading of mixtures with an SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the medium. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with experimental data derived by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is also described.
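
    As a concrete reference point for the equation-of-state ingredient mentioned above, here is a hedged sketch of a Mie-Grüneisen pressure evaluation referenced to a linear Us-up Hugoniot. The quartz-like parameter values are illustrative, not the calibrated values used in the paper, and a constant Gamma*rho is assumed for simplicity.

```python
def mie_gruneisen_pressure(rho, e, rho0, c0, s, gamma0):
    """Mie-Gruneisen pressure referenced to a linear Us-up Hugoniot
    (Us = c0 + s*up). Assumes Gamma*rho = gamma0*rho0 (a common
    simplification); e is specific internal energy above the ambient state."""
    eta = 1.0 - rho0 / rho                            # compression
    p_h = rho0 * c0**2 * eta / (1.0 - s * eta)**2     # Hugoniot pressure
    e_h = p_h * eta / (2.0 * rho0)                    # Hugoniot specific energy
    return p_h + gamma0 * rho0 * (e - e_h)

# Illustrative quartz-like inputs (SI units)
print(mie_gruneisen_pressure(rho=3000.0, e=2.0e5,
                             rho0=2650.0, c0=3900.0, s=1.2, gamma0=0.9))
```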

  6. Leakage-Resilient Circuits without Computational Assumptions

    DEFF Research Database (Denmark)

    Dziembowski, Stefan; Faust, Sebastian

    2012-01-01

    Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage...... on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works...... into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied

  7. Topographic controls on shallow groundwater levels in a steep, prealpine catchment: When are the TWI assumptions valid?

    NARCIS (Netherlands)

    Rinderer, M.; van Meerveld, H.J.; Seibert, J.

    2014-01-01

    Topographic indices like the Topographic Wetness Index (TWI) have been used to predict spatial patterns of average groundwater levels and to model the dynamics of the saturated zone during events (e.g., TOPMODEL). However, the assumptions underlying the use of the TWI in hydrological models, of

  8. A novel modeling approach for job shop scheduling problem under uncertainty

    Directory of Open Access Journals (Sweden)

    Behnam Beheshti Pur

    2013-11-01

    Full Text Available When aiming at improving efficiency and reducing cost in manufacturing environments, production scheduling can play an important role. Although a common workshop is full of uncertainties, researchers using mathematical programs have mainly focused on deterministic problems. After briefly reviewing and discussing popular modeling approaches in the field of stochastic programming, this paper proposes a new approach based on utility theory for a certain range of problems and under some practical assumptions. Expected utility programming, as the proposed approach, is compared with other well-known methods, and its meaningfulness and usefulness are illustrated via numerical examples and a real case.

  9. A Test of the Fundamental Physics Underlying Exoplanet Climate Models

    Science.gov (United States)

    Beatty, Thomas; Keating, Dylan; Cowan, Nick; Gaudi, Scott; Kataria, Tiffany; Fortney, Jonathan; Stassun, Keivan; Collins, Karen; Deming, Drake; Bell, Taylor; Dang, Lisa; Rogers, Tamara; Colon, Knicole

    2018-05-01

    A fundamental issue in how we understand exoplanet atmospheres is the assumed physical behavior underlying 3D global circulation models (GCMs). Modeling an entire 3D atmosphere is a Herculean task, and so in exoplanet GCMs we generally assume that there are no clouds, no magnetic effects, and chemical equilibrium (e.g., Kataria et al 2016). These simplifying assumptions are computationally necessary, but at the same time their exclusion allows for a large theoretical lee-way when comparing to data. Thus, though significant discrepancies exist between almost all a priori GCM predictions and their corresponding observations, these are assumed to be due to the lack of clouds, or atmospheric drag, or chemical disequilibrium, in the models (e.g., Wong et al. 2016, Stevenson et al. 2017, Lewis et al. 2017, Zhang et al. 2018). Since these effects compete with one another and have large uncertainties, this makes tests of the fundamental physics in GCMs extremely difficult. To rectify this, we propose to use 88.4 hours of Spitzer time to observe 3.6um and 4.5um phase curves of the transiting giant planet KELT-9b. KELT-9b has an observed dayside temperature of 4600K (Gaudi et al. 2017), which means that there will very likely be no clouds on the day- or nightside, and is hot enough that the atmosphere should be close to local chemical equilibrium. Additionally, we plan to leverage KELT-9b's high temperature to make the first measurement of global wind speed on an exoplanet (Bell & Cowan 2018), giving a constraint on atmospheric drag and magnetic effects. Combined, this means KELT-9b is close to a real-world GCM, without most of the effects present on lower temperature planets. Additionally, since KELT-9b orbits an extremely bright host star these will be the highest signal-to-noise ratio phase curves taken with Spitzer by more than a factor of two. This gives us a unique opportunity to make the first precise and direct investigation into the fundamental physics that are the

  10. Peacebuilding: assumptions, practices and critiques

    Directory of Open Access Journals (Sweden)

    Cravo, Teresa Almeida

    2017-05-01

    Full Text Available Peacebuilding has become a guiding principle of international intervention in the periphery since its inclusion in the Agenda for Peace of the United Nations in 1992. The aim of creating the conditions for a self-sustaining peace in order to prevent a return to armed conflict is, however, far from easy or consensual. The conception of liberal peace proved particularly limited, and inevitably controversial, and the reality of war-torn societies far more complex than anticipated by international actors that today assume activities in the promotion of peace in post-conflict contexts. With a trajectory full of contested successes and some glaring failures, the current model has been the target of harsh criticism and widespread scepticism. This article critically examines the theoretical background and practicalities of peacebuilding, exploring its ambition as well as the weaknesses of the paradigm adopted by the international community since the 1990s.

  11. Assumptions and Policy Decisions for Vital Area Identification Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    U.S. Nuclear Regulatory Commission and IAEA guidance indicate that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from the Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions can be overlooked at the beginning stage of VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for the analysis were applied based on document research and expert panel discussions. It was also found that there are more assumptions to define in further studies for other types of nuclear power plants. One of these assumptions is mission time, which was inherited from the PSA.

  12. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Hallberg

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other...... application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical...

  13. Modelling of diurnal cycle under climate change

    Energy Technology Data Exchange (ETDEWEB)

    Eliseev, A V; Bezmenov, K V; Demchenko, P F; Mokhov, I I; Petoukhov, V K [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Atmospheric Physics

    1996-12-31

    The observed diurnal temperature range (DTR) has displayed remarkable change during the last 30 years. Land air DTR generally decreases under global climate warming because the night minimum temperature increases more than the day maximum temperature. Changes in the characteristics of the atmospheric hydrological cycle under global warming, and possible changes in the background aerosol content of the atmosphere, may cause essential errors in the estimation of DTR tendencies under global warming. This study investigates the effect of cloudiness on the DTR and on the diurnal range of blackbody radiative emissivity. It is shown that in some cases (particularly in cold seasons) it results in opposite changes in DTR and BD at doubled atmospheric CO2 content. The influence of background aerosol is similar to that of cloudiness.

  14. Modelling of diurnal cycle under climate change

    Energy Technology Data Exchange (ETDEWEB)

    Eliseev, A.V.; Bezmenov, K.V.; Demchenko, P.F.; Mokhov, I.I.; Petoukhov, V.K. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Atmospheric Physics

    1995-12-31

    The observed diurnal temperature range (DTR) has displayed remarkable change during the last 30 years. Land air DTR generally decreases under global climate warming because the night minimum temperature increases more than the day maximum temperature. Changes in the characteristics of the atmospheric hydrological cycle under global warming, and possible changes in the background aerosol content of the atmosphere, may cause essential errors in the estimation of DTR tendencies under global warming. This study investigates the effect of cloudiness on the DTR and on the diurnal range of blackbody radiative emissivity. It is shown that in some cases (particularly in cold seasons) it results in opposite changes in DTR and BD at doubled atmospheric CO2 content. The influence of background aerosol is similar to that of cloudiness.

  15. An EPQ Inventory Model with Allowable Shortages for Deteriorating Items under Trade Credit Policy

    Directory of Open Access Journals (Sweden)

    Zohreh Molamohamadi

    2014-01-01

    Full Text Available This paper attempts to obtain the replenishment policy of a manufacturer under an EPQ inventory model with backorders. It is assumed here that the manufacturer delays paying for the goods received from the supplier and that the items start deteriorating as soon as they are produced. Based on these assumptions, the manufacturer's inventory model is formulated, and a cuckoo search algorithm is then applied to find the replenishment time, order quantity, and selling price with the objective of maximizing the manufacturer's total net profit. Besides, the traditional inventory system is shown to be a special case of the proposed model, and numerical examples are given to demonstrate the better performance of trade credit. These examples are also used to compare the results of the cuckoo search algorithm with a genetic algorithm and to investigate the effects of the model parameters on its variables and net profit.

  16. Projecting hydropower production under future climates: a review of modelling challenges and open questions

    Science.gov (United States)

    Schaefli, Bettina

    2015-04-01

    Hydropower is a pillar of renewable electricity production in almost all world regions. The planning horizon of major hydropower infrastructure projects stretches over several decades, and consideration of evolving climatic conditions plays an ever increasing role. This review of model-based climate change impact assessments provides a synthesis of the wealth of underlying modelling assumptions, highlights the importance of local factors and attempts to identify the most urgent open questions. Based on existing case studies, it critically discusses whether current hydro-climatic modelling frameworks are likely to provide narrow enough water scenario ranges to be included in economic analyses for end-to-end climate change impact assessments, including electricity market models. This is complemented by an overview of boundary conditions that are not, or only indirectly, climate-related, such as economic growth, legal constraints, national subsidy frameworks or growing competition for water, which might locally outweigh any climate change impacts by a large margin.

  17. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to

  18. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  19. Reward optimization in the primate brain: a probabilistic model of decision making under uncertainty.

    Directory of Open Access Journals (Sweden)

    Yanping Huang

    Full Text Available A key problem in neuroscience is understanding how the brain makes decisions under uncertainty. Important insights have been gained using tasks such as the random dots motion discrimination task, in which the subject makes decisions based on noisy stimuli. A descriptive model known as the drift diffusion model has previously been used to explain psychometric and reaction time data from such tasks, but to fully explain the data one is forced to make ad hoc assumptions such as a time-dependent collapsing decision boundary. We show that such assumptions are unnecessary when decision making is viewed within the framework of partially observable Markov decision processes (POMDPs). We propose an alternative model for decision making based on POMDPs. We show that the motion discrimination task reduces to the problems of (1) computing beliefs (posterior distributions) over the unknown direction and motion strength from noisy observations in a Bayesian manner, and (2) selecting actions based on these beliefs to maximize the expected sum of future rewards. The resulting optimal policy (belief-to-action mapping) is shown to be equivalent to a collapsing decision threshold that governs the switch from evidence accumulation to a discrimination decision. We show that the model accounts for both accuracy and reaction time as a function of stimulus strength, as well as for different speed-accuracy conditions in the random dots task.
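
    The belief-computation step (1) can be illustrated with a toy sketch. This is not the authors' POMDP solver; it accumulates the log odds of the two motion directions from Gaussian observations and, for simplicity, uses a fixed belief threshold where the paper derives a collapsing one. All parameter values are made up.

```python
import numpy as np

def simulate_decision(coherence, direction, threshold=0.9, dt=0.01,
                      sigma=1.0, max_t=2.0, seed=0):
    """Bayesian evidence accumulation for a two-direction random-dots task.
    Momentary evidence x ~ N(direction * coherence * dt, sigma^2 * dt); the
    posterior belief in rightward motion is tracked via its log odds."""
    rng = np.random.default_rng(seed)
    log_odds, t = 0.0, 0.0
    while t < max_t:
        x = rng.normal(direction * coherence * dt, sigma * np.sqrt(dt))
        log_odds += 2.0 * coherence * x / sigma**2   # log-likelihood ratio update
        belief = 1.0 / (1.0 + np.exp(-log_odds))     # P(rightward | evidence so far)
        t += dt
        if belief > threshold:
            return +1, t                             # choose rightward
        if belief < 1.0 - threshold:
            return -1, t                             # choose leftward
    return 0, t                                      # no decision within max_t

choice, reaction_time = simulate_decision(coherence=1.5, direction=+1)
```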

  20. Double diffusivity model under stochastic forcing

    Science.gov (United States)

    Chattopadhyay, Amit K.; Aifantis, Elias C.

    2017-05-01

    The "double diffusivity" model was proposed in the late 1970s, and reworked in the early 1980s, as a continuum counterpart to existing discrete models of diffusion corresponding to high diffusivity paths, such as grain boundaries and dislocation lines. It was later rejuvenated in the 1990s to interpret experimental results on diffusion in polycrystalline and nanocrystalline specimens where grain boundaries and triple grain boundary junctions act as high diffusivity paths. Technically, the model pans out as a system of coupled Fick-type diffusion equations to represent "regular" and "high" diffusivity paths with "source terms" accounting for the mass exchange between the two paths. The model remit was extended by analogy to describe flow in porous media with double porosity, as well as to model heat conduction in media with two nonequilibrium local temperature baths, e.g., ion and electron baths. Uncoupling of the two partial differential equations leads to a higher-ordered diffusion equation, solutions of which could be obtained in terms of classical diffusion equation solutions. Similar equations could also be derived within an "internal length" gradient (ILG) mechanics formulation applied to diffusion problems, i.e., by introducing nonlocal effects, together with inertia and viscosity, in a mechanics based formulation of diffusion theory. While being remarkably successful in studies related to various aspects of transport in inhomogeneous media with deterministic microstructures and nanostructures, its implications in the presence of stochasticity have not yet been considered. This issue becomes particularly important in the case of diffusion in nanopolycrystals whose deterministic ILG-based theoretical calculations predict a relaxation time that is only about one-tenth of the actual experimentally verified time scale. This article provides the "missing link" in this estimation by adding a vital element in the ILG structure, that of stochasticity, that takes into
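
    The coupled Fick-type system with exchange ("source") terms described above can be written compactly; the following is a sketch in standard notation (the symbols are generic, not taken verbatim from the article):

```latex
\begin{aligned}
\frac{\partial \rho_1}{\partial t} &= D_1 \nabla^2 \rho_1 - \kappa_1 \rho_1 + \kappa_2 \rho_2,\\
\frac{\partial \rho_2}{\partial t} &= D_2 \nabla^2 \rho_2 + \kappa_1 \rho_1 - \kappa_2 \rho_2,
\end{aligned}
```

    where rho_1 and rho_2 are the concentrations in the regular and high-diffusivity paths, D_1 and D_2 the corresponding diffusivities, and kappa_1, kappa_2 the mass-exchange coefficients. Eliminating one of the two fields yields the single higher-order diffusion equation mentioned in the abstract.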

  1. Judging statistical models of individual decision making under risk using in- and out-of-sample criteria.

    Science.gov (United States)

    Drichoutis, Andreas C; Lusk, Jayson L

    2014-01-01

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample.
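
    One of the error specifications compared above, the Fechner (additive noise) specification, can be sketched briefly. The example below wraps a CRRA expected-utility functional in a probit choice rule; the functional form, names and numbers are assumptions for illustration, not the paper's estimated specification.

```python
from scipy.stats import norm

def crra(x, r):
    """CRRA utility of a payoff x; r is relative risk aversion (r != 1 assumed)."""
    return x**(1.0 - r) / (1.0 - r)

def prob_choose_A(lottery_A, lottery_B, r, sigma):
    """Fechner error specification: the expected-utility difference is perturbed
    by N(0, sigma) noise, so the probability of choosing A is a probit in
    EU(A) - EU(B). Each lottery is a list of (probability, payoff) pairs."""
    eu_A = sum(p * crra(x, r) for p, x in lottery_A)
    eu_B = sum(p * crra(x, r) for p, x in lottery_B)
    return norm.cdf((eu_A - eu_B) / sigma)

# Illustrative: a 50/50 gamble versus a sure payoff
gamble = [(0.5, 10.0), (0.5, 100.0)]
sure_thing = [(1.0, 50.0)]
print(prob_choose_A(gamble, sure_thing, r=0.5, sigma=1.0))
```

    The log-likelihood of a set of observed choices is then the sum of the log choice probabilities, which is the quantity the in- and out-of-sample criteria above are computed from.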

  2. Judging statistical models of individual decision making under risk using in- and out-of-sample criteria.

    Directory of Open Access Journals (Sweden)

    Andreas C Drichoutis

    Full Text Available Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample.

  3. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  4. Modeling heat stress under different environmental conditions.

    Science.gov (United States)

    Carabaño, M J; Logar, B; Bormann, J; Minet, J; Vanrobays, M-L; Díaz, C; Tychon, B; Gengler, N; Hammami, H

    2016-05-01

    Renewed interest in heat stress effects on livestock productivity derives from climate change, which is expected to increase temperatures and the frequency of extreme weather events. This study aimed at evaluating the effect of temperature and humidity on milk production in highly selected dairy cattle populations across 3 European regions differing in climate and production systems, to detect differences and similarities that can be used to optimize heat stress (HS) effect modeling. Milk, fat, and protein test day data from official milk recording for 1999 to 2010 in 4 Holstein populations located in the Walloon Region of Belgium (BEL), Luxembourg (LUX), Slovenia (SLO), and southern Spain (SPA) were merged with temperature and humidity data provided by the state meteorological agencies. After merging, the number of test day records/cows per trait ranged from 686,726/49,655 in SLO to 1,982,047/136,746 in BEL. The ranges of the daily average and maximum temperature-humidity indices (THIavg and THImax) were widest in SLO (22-74 for THIavg, 28-84 for THImax) and narrowest in SPA (39-76 and 46-83). Change point techniques were used to determine comfort thresholds, which differed across traits and climatic regions. Milk yield showed an inverted U-shaped pattern of response across the THI scale, with a HS threshold around 73 THImax units. For fat and protein, thresholds were lower than for milk yield and were shifted around 6 THI units toward larger values in SPA compared with the other countries. Fat showed lower HS thresholds than protein traits in all countries. The traditional broken-line model was compared with quadratic and cubic fits of the pattern of response in production to increasing heat loads. A cubic polynomial model allowing for individual variation in patterns of response, with THIavg as the heat load measure, showed the best statistical features. Higher/lower producing animals showed less/more persistent production (quantity and quality) across the THI scale. The
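
    For reference, one widely used temperature-humidity index formula (the NRC-1971 form) is sketched below; the exact THI definition used in the study may differ, so this is only an illustration of how a heat load measure is computed from temperature and relative humidity.

```python
def thi(temp_c, rel_humidity_pct):
    """Temperature-humidity index in the common NRC-1971 form:
    THI = Tdb(F) - (0.55 - 0.0055 * RH) * (Tdb(F) - 58), with Tdb in deg F."""
    t_f = 1.8 * temp_c + 32.0
    return t_f - (0.55 - 0.0055 * rel_humidity_pct) * (t_f - 58.0)

# Example: 30 degC at 60% relative humidity, near the heat stress range above
print(thi(30.0, 60.0))
```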

  5. Animal behavior models of the mechanisms underlying antipsychotic atypicality.

    NARCIS (Netherlands)

    Geyer, M.A.; Ellenbroek, B.A.

    2003-01-01

    This review describes the animal behavior models that provide insight into the mechanisms underlying the critical differences between the actions of typical vs. atypical antipsychotic drugs. Although many of these models are capable of differentiating between antipsychotic and other psychotropic

  6. Formalization and Analysis of Reasoning by Assumption

    OpenAIRE

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been speci...

  7. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  8. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  9. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    Science.gov (United States)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last

  10. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    ) model [borges99data]. These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our

  11. Interface Input/Output Automata: Splitting Assumptions from Guarantees

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Nyman, Ulrik; Wasowski, Andrzej

    2006-01-01

    's I/O automata [11], relying on a context-dependent notion of refinement based on relativized language inclusion. There are two main contributions of the work. First, we explicitly separate assumptions from guarantees, increasing the modeling power of the specification language and demonstrating an interesting

  12. Measuring Productivity Change without Neoclassical Assumptions: A Conceptual Analysis

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2008-01-01

    textabstractThe measurement of productivity change (or difference) is usually based on models that make use of strong assumptions such as competitive behaviour and constant returns to scale. This survey discusses the basics of productivity measurement and shows that one can dispense with most if not

  13. Monitoring Assumptions in Assume-Guarantee Contracts

    Directory of Open Access Journals (Sweden)

    Oleg Sokolsky

    2016-05-01

    Full Text Available Pre-deployment verification of software components with respect to behavioral specifications in the assume-guarantee form does not, in general, guarantee absence of errors at run time. This is because assumptions about the environment cannot be discharged until the environment is fixed. An intuitive approach is to complement pre-deployment verification of guarantees, up to the assumptions, with post-deployment monitoring of environment behavior to check that the assumptions are satisfied at run time. Such a monitor is typically implemented by instrumenting the application code of the component. An additional challenge for the monitoring step is that environment behaviors are typically obtained through an I/O library, which may alter the component's view of the input format. This transformation requires us to introduce a second pre-deployment verification step to ensure that alarms raised by the monitor would indeed correspond to violations of the environment assumptions. In this paper, we describe an approach for constructing monitors and verifying them against the component assumption. We also discuss limitations of instrumentation-based monitoring and potential ways to overcome it.

  14. Formalization and analysis of reasoning by assumption.

    Science.gov (United States)

    Bosse, Tibor; Jonker, Catholijn M; Treur, Jan

    2006-01-02

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.

  15. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it treats random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  16. Pricing Participating Products under a Generalized Jump-Diffusion Model

    Directory of Open Access Journals (Sweden)

    Tak Kuen Siu

    2008-01-01

    Full Text Available We propose a model for valuing participating life insurance products under a generalized jump-diffusion model with a Markov-switching compensator. It also nests a number of important and popular models in finance, including the classes of jump-diffusion models and Markovian regime-switching models. The Esscher transform is employed to determine an equivalent martingale measure. Simulation experiments are conducted to illustrate the practical implementation of the model and to highlight some features that can be obtained from our model.

  17. MODELING OF THE BEHAVIOUR OF RHEOLOGICAL BODIES UNDER DIFFERENT LOADING LAWS

    Directory of Open Access Journals (Sweden)

    V. V. Bendyukov

    2014-01-01

    Full Text Available The paper offers a model of the behaviour of rheological bodies (viscoelastic materials, structures or systems) under the controlling influence of a load acting according to a given law over some period of time.

  18. Modelling comonotonic group-life under dependent decrement causes

    OpenAIRE

    Wang, Dabuxilatu

    2011-01-01

    Comonotonicity is an extreme case of dependency between random variables. This article considers an extension of the single-life model under multiple dependent decrement causes to the case of comonotonic group-life.

  19. A sliding mode observer for hemodynamic characterization under modeling uncertainties

    KAUST Repository

    Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2014-01-01

    This paper addresses the case of physiological states reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial

  20. DISPLACE: a dynamic, individual-based model for spatial fishing planning and effort displacement: Integrating underlying fish population models

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Miethe, Tanja

    or to the alteration of individual fishing patterns. We demonstrate that integrating the spatial activity of vessels and local fish stock abundance dynamics allow for interactions and more realistic predictions of fishermen behaviour, revenues and stock abundance......We previously developed a spatially explicit, individual-based model (IBM) evaluating the bio-economic efficiency of fishing vessel movements between regions according to the catching and targeting of different species based on the most recent high resolution spatial fishery data. The main purpose...... was to test the effects of alternative fishing effort allocation scenarios related to fuel consumption, energy efficiency (value per litre of fuel), sustainable fish stock harvesting, and profitability of the fisheries. The assumption here was constant underlying resource availability. Now, an advanced...

  1. The homogeneous marginal utility of income assumption

    NARCIS (Netherlands)

    Demuynck, T.

    2015-01-01

    We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and

  2. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  3. Critically Challenging Some Assumptions in HRD

    Science.gov (United States)

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  4. Formalization and Analysis of Reasoning by Assumption

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning

  5. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  6. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the use and reporting of assumption checks in twelve clinical psychology journals. The findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for a heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
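
    The misconception highlighted above is easy to demonstrate: the normality assumption concerns the regression errors, not the raw variables. The following minimal sketch (illustrative data, standard scipy calls) fits an ordinary least-squares line to a heavily skewed predictor and checks normality of the residuals rather than of the variables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(size=200)                  # skewed predictor: not a problem in itself
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, 200)  # errors are normal by construction

fit = stats.linregress(x, y)
residuals = y - (fit.intercept + fit.slope * x)

# The assumption to check is normality of the residuals, not of x or y
print("Shapiro p, residuals:", stats.shapiro(residuals).pvalue)   # typically large
print("Shapiro p, raw x    :", stats.shapiro(x).pvalue)           # small, yet harmless for OLS
```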

  7. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. A Note on the Fundamental Theorem of Asset Pricing under Model Uncertainty

    Directory of Open Access Journals (Sweden)

    Erhan Bayraktar

    2014-10-01

    Full Text Available We show that the recent results on the Fundamental Theorem of Asset Pricing and the super-hedging theorem in the context of model uncertainty can be extended to the case in which the options available for static hedging (hedging options are quoted with bid-ask spreads. In this set-up, we need to work with the notion of robust no-arbitrage which turns out to be equivalent to no-arbitrage under the additional assumption that hedging options with non-zero spread are non-redundant. A key result is the closedness of the set of attainable claims, which requires a new proof in our setting.

  9. PREDICTION OF BLOOD PATTERN IN S-SHAPED MODEL OF ARTERY UNDER NORMAL BLOOD PRESSURE

    Directory of Open Access Journals (Sweden)

    Mohd Azrul Hisham Mohd Adib

    2013-06-01

    Full Text Available Athletes are susceptible to a wide variety of traumatic and non-traumatic vascular injuries to the lower limb. This paper aims to predict the three-dimensional flow pattern of blood through an S-shaped geometrical artery model. The model was created using Fluid Structure Interaction (FSI) software. The geometrical S-shaped artery model is suitable for understanding the pattern of blood flow under constant normal blood pressure. In this study, a numerical method is used that works on the assumption that the blood is incompressible and Newtonian; thus, a laminar type of flow can be considered. The authors have compared the results with a previous FSI validation study. The validation and verification of the simulation studies are performed by comparing the maximum velocity at t = 0.4 s, because at this time the blood accelerates rapidly. In addition, the resulting blood flow at various times, under the same boundary conditions in the S-shaped geometrical artery model, is presented. The graph shows that velocity increases linearly with time. Thus, it can be concluded that the flow of blood increases with respect to the pressure inside the body.

  10. A simplified model of choice behavior under uncertainty

    Directory of Open Access Journals (Sweden)

    Ching-Hung Lin

    2016-08-01

    Full Text Available The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated the prospect utility (PU) models (Ahn et al., 2008) to be more effective than the EU models in the IGT. Nevertheless, after some preliminary tests, we propose that Ahn et al.'s (2008) PU model is not optimal due to some incompatible results between our behavioral and modeling data. This study aims to modify Ahn et al.'s (2008) PU model into a simplified model, and we collected 145 subjects' IGT performance as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the parameters α, λ, and A have a hierarchical order of influence in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay-loss-shift rather than foreseeing the long-term outcome. However, there are still other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated.
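
    For orientation, a prospect-utility valuation with loss aversion, combined with a delta learning rule and softmax choice, can be sketched as below. This is a generic PU-style learner in the spirit of the models discussed; the parameter names, values and toy deck payoffs are assumptions, not the authors' fitted specification.

```python
import numpy as np

def pu_value(x, alpha, lam):
    """Prospect utility of a net payoff x: curvature alpha gives diminishing
    sensitivity, lam > 1 weights losses more heavily than gains."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def simulate_igt_learner(decks, alpha=0.3, lam=1.5, a_rate=0.2, c=1.0, seed=0):
    """Delta-rule expectancy learning with softmax choice over IGT-like decks.
    Each deck is (gain, loss, p_loss): a fixed gain plus a loss with prob p_loss."""
    rng = np.random.default_rng(seed)
    ev = np.zeros(len(decks))
    choices = []
    for _ in range(100):
        p = np.exp(c * ev) / np.exp(c * ev).sum()              # softmax choice rule
        d = rng.choice(len(decks), p=p)
        gain, loss, p_loss = decks[d]
        net = gain - (loss if rng.random() < p_loss else 0.0)
        ev[d] += a_rate * (pu_value(net, alpha, lam) - ev[d])   # delta learning rule
        choices.append(d)
    return choices

# Toy decks: two "bad" decks (large gains, larger losses) and two "good" decks
decks = [(100, 250, 0.5), (100, 1250, 0.1), (50, 50, 0.5), (50, 250, 0.1)]
print(simulate_igt_learner(decks)[:10])
```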

  11. A new model for friction under shock conditions

    Directory of Open Access Journals (Sweden)

    Dambakizi F.

    2011-01-01

    Full Text Available This article is aimed at the development of a new model for friction under shock conditions. Thanks to a subgrid model and a specific Coulomb friction law, it takes into account the interface temperature and deformation, but also the influence of asperities when the contact pressure is relatively low (≤ 3 GPa).

  12. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…

  13. Damage Model of Reinforced Concrete Members under Cyclic Loading

    Science.gov (United States)

    Wei, Bo Chen; Zhang, Jing Shu; Zhang, Yin Hua; Zhou, Jia Lai

    2018-06-01

    Based on the Kumar damage model, a new damage model for reinforced concrete members is established in this paper. According to the damage characteristics of reinforced concrete members subjected to cyclic loading, four judgment conditions for determining the rationality of damage models are put forward. An ideal damage index (D) is supposed to vary within a scale from zero (no damage) to one (collapse). D should be a monotonically increasing function, and should continue to increase under repeated cycles of the same displacement amplitude. For members under large-amplitude displacement loading, the growth rate of D should be greater than under small-amplitude displacement loading. Subsequently, the Park-Ang damage model, the Niu-Ren damage model, the Lu-Wang damage model and the proposed damage model are analyzed for 30 experimental reinforced concrete members, including slabs, walls, beams and columns. The results show that the existing damage models do not fully satisfy the proposed judgment conditions, whereas the proposed damage model does. Therefore, it can be concluded that the proposed damage model can be used for evaluating and predicting the damage performance of RC members under cyclic loading.
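
    As a rough illustration only, the following sketch (hypothetical function and tolerance, not from the paper) checks two of the judgment conditions named above, boundedness of D in [0, 1] and monotone growth, for a candidate damage history:

      def check_damage_history(D_history, tol=1e-9):
          # Judgment conditions (i) and (ii): D bounded in [0, 1] and
          # monotonically non-decreasing over the loading history.
          bounded = all(-tol <= D <= 1.0 + tol for D in D_history)
          monotone = all(D_history[i + 1] >= D_history[i] - tol
                         for i in range(len(D_history) - 1))
          return bounded and monotone

      # A plausible damage trajectory over successive loading cycles
      print(check_damage_history([0.0, 0.05, 0.12, 0.30, 0.72, 1.0]))   # True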

  14. Cascading failures in interdependent systems under a flow redistribution model

    Science.gov (United States)

    Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman

    2018-02-01

    Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models in which only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by a redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_A,i, C_A,i} and {L_B,i, C_B,i}, i = 1, ..., n, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1 - a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction p1 of lines in A and a fraction p2 of lines in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness, in that interdependency can lead to improved robustness for each individual network.
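
    A minimal sketch of the redistribution rule described above is given below; the function, parameter values, and the choice to drop load when no lines remain alive on the receiving side are assumptions for illustration, not the authors' code:

      import numpy as np

      def cascade(L_A, C_A, L_B, C_B, attacked_A, attacked_B, a=0.3, b=0.3):
          # One realization of the load-redistribution cascade sketched above.
          # L_*, C_* are 1-D arrays of line loads and capacities; attacked_* are
          # boolean arrays marking the lines removed by the initial attack.
          L_A, L_B = L_A.astype(float).copy(), L_B.astype(float).copy()
          failed_A, failed_B = attacked_A.copy(), attacked_B.copy()
          pending_A = L_A[failed_A].sum()    # load still to be shed from A
          pending_B = L_B[failed_B].sum()    # load still to be shed from B
          while pending_A > 0 or pending_B > 0:
              alive_A, alive_B = ~failed_A, ~failed_B
              if not (alive_A.any() or alive_B.any()):
                  break                      # total collapse: nowhere to shed load
              extra_A = extra_B = 0.0
              if pending_A > 0:
                  if alive_B.any():
                      extra_B += a * pending_A / alive_B.sum()
                  if alive_A.any():
                      extra_A += (1 - a) * pending_A / alive_A.sum()
              if pending_B > 0:
                  if alive_A.any():
                      extra_A += b * pending_B / alive_A.sum()
                  if alive_B.any():
                      extra_B += (1 - b) * pending_B / alive_B.sum()
              L_A[alive_A] += extra_A
              L_B[alive_B] += extra_B
              new_A = alive_A & (L_A > C_A)
              new_B = alive_B & (L_B > C_B)
              pending_A, pending_B = L_A[new_A].sum(), L_B[new_B].sum()
              failed_A |= new_A
              failed_B |= new_B
          return failed_A, failed_B

      # Hypothetical example: 100 lines per system, 40% capacity margin,
      # random attack on roughly 5% of the lines in A.
      rng = np.random.default_rng(0)
      L = rng.uniform(0.5, 1.0, size=100)
      fa, fb = cascade(L, 1.4 * L, L.copy(), 1.4 * L,
                       rng.random(100) < 0.05, np.zeros(100, bool))
      print(fa.sum(), fb.sum())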

  15. Skinfold creep under load of caliper. Linear visco- and poroelastic model simulations.

    Science.gov (United States)

    Nowak, Joanna; Nowak, Bartosz; Kaczmarek, Mariusz

    2015-01-01

    This paper addresses the diagnostic idea proposed in [11] to measure a parameter called the rate of creep of the axillary fold of tissue, using a modified Harpenden skinfold caliper, in order to distinguish normal and oedematous tissue. Our simulations are intended to help in understanding the creep phenomenon and the creep rate parameter as a sensitive indicator of the existence of oedema. The parametric analysis shows the tissue behavior under the external load as well as its sensitivity to changes of crucial hydro-mechanical tissue parameters, e.g., permeability or stiffness. The linear viscoelastic and poroelastic models of normal (single-phase) and oedematous tissue (two-phase: swelled tissue with an excess of interstitial fluid) implemented in the COMSOL Multiphysics environment are used. Simulations are performed within the range of small strains for a simplified fold geometry, material characterization and boundary conditions. The predicted creep is the result of viscosity (viscoelastic model) or pore fluid displacement (poroelastic model) in the tissue. The tissue deformations, interstitial fluid pressure and interstitial fluid velocity are discussed in a parametric analysis with respect to the elasticity modulus, relaxation time and permeability of the tissue. The creep rate determined within the models of tissue is compared and referred to the diagnostic idea in [11]. The results obtained from the two linear models of subcutaneous tissue indicate that the form of the creep curve and the creep rate are sensitive to the material parameters which characterize the tissue. However, the adopted modelling assumptions point to a limited applicability of the creep rate as a discriminant of oedema.

  16. Evaluating methodological assumptions of a catch-curve survival estimation of unmarked precocial shorebird chicks

    Science.gov (United States)

    McGowan, Conor P.; Gardner, Beth

    2013-01-01

    Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival from age-based count data was presented, using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios in which the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with the true survival rates of the simulation. Violating the accurate-aging and independence assumptions did not result in biased daily survival estimates, whereas unequal detection of younger or older birds and violating the birth-death equilibrium did result in estimator bias. Ensuring that all ages are equally detectable, and timing data collection to approximately meet the birth-death equilibrium, are key to the successful use of this method for precocial shorebirds.
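
    For orientation, the basic catch-curve idea referred to above can be sketched as a log-linear fit of counts against age; the function name and example counts below are hypothetical:

      import numpy as np

      def catch_curve_survival(ages, counts):
          # Log-linear (catch-curve) fit: under constant daily survival, equal
          # detectability across ages and birth-death equilibrium, log counts
          # decline linearly with age and daily survival is exp(slope).
          slope, _intercept = np.polyfit(np.asarray(ages, float),
                                         np.log(np.asarray(counts, float)), 1)
          return float(np.exp(slope))

      # Hypothetical counts of chicks observed at ages 0..5 days
      print(catch_curve_survival([0, 1, 2, 3, 4, 5], [120, 95, 80, 66, 54, 45]))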

  17. Towards New Probabilistic Assumptions in Business Intelligence

    OpenAIRE

    Schumann Andrew; Szelc Andrzej

    2015-01-01

    One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. In exploring the physical universe, additive measures such as mass, force, energy and temperature are used. Economics and conventional business intelligence try to continue this empiricist tradition, and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...

  18. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    Being a kind of intuitive psychology, the 'revealed preferences'-based approaches to determining acceptable risks are a useful method for the generation of hypotheses. In view of the fact that reliability engineering develops faster than methods for determining reliability targets, the revealed-preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'revealed preferences' theory is based are identified and analysed, and then compared with experimentally obtained results. (orig./DG) [de

  19. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  20. Estimating true evolutionary distances under the DCJ model.

    Science.gov (United States)

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.

  1. Modeling size effects on fatigue life of a zirconium-based bulk metallic glass under bending

    International Nuclear Information System (INIS)

    Yuan Tao; Wang Gongyao; Feng Qingming; Liaw, Peter K.; Yokoyama, Yoshihiko; Inoue, Akihisa

    2013-01-01

    A size effect on the fatigue-life cycles of a Zr50Cu30Al10Ni10 (at.%) bulk metallic glass has been observed in four-point-bending fatigue experiments. Under the same bending-stress condition, large samples tend to exhibit longer fatigue lives than small samples. This size effect on the fatigue life cannot be satisfactorily explained by the flaw-based Weibull theories. Based on the experimental results, this study explores possible approaches to modeling the size effect on the bending-fatigue life of bulk metallic glasses, and proposes two fatigue-life models based on the Weibull distribution. The first model assumes, empirically, log-linear effects of the sample thickness on the Weibull parameters. The second model incorporates mechanistic knowledge of the fatigue behavior of metallic glasses, and assumes that the shear-band density, instead of the flaw density, has a significant influence on the bending fatigue-life cycles. Promising predictive results provide evidence of the potential validity of the models and their assumptions.

  2. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    Full Text Available The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for), to see whether they are important, in general or with respect to the specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  3. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007

  4. One-dimensional models of thermal activation under shear stress

    CSIR Research Space (South Africa)

    Nabarro, FRN

    2003-01-01

    Full Text Available The one-dimensional models presented here may illuminate the study of more realistic models. For the model in which as many dislocations are poised for backward jumps as for forward jumps, the experimental activation volume Ve(σa) under applied stresses close to σa differs from the true activation volume V(σ) evaluated at σ → σa. The relations between the two are developed. A model is then discussed in which fewer dislocations are available for backward than for forward jumps. Finally...

  5. Generative models versus underlying symmetries to explain biological pattern.

    Science.gov (United States)

    Frank, S A

    2014-06-01

    Mathematical models play an increasingly important role in the interpretation of biological experiments. Studies often present a model that generates the observations, connecting hypothesized process to an observed pattern. Such generative models confirm the plausibility of an explanation and make testable hypotheses for further experiments. However, studies rarely consider the broad family of alternative models that match the same observed pattern. The symmetries that define the broad class of matching models are in fact the only aspects of information truly revealed by observed pattern. Commonly observed patterns derive from simple underlying symmetries. This article illustrates the problem by showing the symmetry associated with the observed rate of increase in fitness in a constant environment. That underlying symmetry reveals how each particular generative model defines a single example within the broad class of matching models. Further progress on the relation between pattern and process requires deeper consideration of the underlying symmetries. © 2014 The Author. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  6. Modelling human behaviours and reactions under dangerous environment

    OpenAIRE

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification on the conscious/subconscious behaviors and reactions...

  7. Numerical solution of dynamic equilibrium models under Poisson uncertainty

    DEFF Research Database (Denmark)

    Posch, Olaf; Trimborn, Timo

    2013-01-01

    We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retar...... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households....

  8. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown prophecy formula and the correction-for-attenuation formula, as well as…

   9. An Agent-Based Model of School Closing in Under-Vaccinated Communities During Measles Outbreaks.

    Science.gov (United States)

    Getz, Wayne M; Carlson, Colin; Dougherty, Eric; Porco Francis, Travis C; Salter, Richard

    2016-04-01

    The winter 2014-15 measles outbreak in the US represents a significant crisis in the emergence of a functionally extirpated pathogen. Conclusively linking this outbreak to decreases in the measles/mumps/rubella (MMR) vaccination rate (driven by anti-vaccine sentiment) is critical to motivating MMR vaccination. We used the NOVA modeling platform to build a stochastic, spatially structured, individual-based SEIR model of outbreaks, under the assumption that R0 ≈ 7 for measles. We show that this implies that herd immunity requires vaccination coverage greater than approximately 85%. We used a network-structured version of our NOVA model that involved two communities, one at a relatively low coverage of 85% and one at a higher coverage of 95%, both of which had 400-student schools embedded, as well as students occasionally visiting superspreading sites (e.g., high-density theme parks, cinemas, etc.). These two vaccination coverage levels are within the range of values occurring across California counties. Transmission rates at schools and superspreading sites were arbitrarily set to 5 and 15 times background community rates, respectively. Simulations of our model demonstrate that a 'send unvaccinated students home' policy in low-coverage counties is extremely effective at shutting down outbreaks of measles.
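
    The quoted 85% figure is consistent with the standard herd-immunity threshold 1 - 1/R0; a minimal check, assuming homogeneous mixing and a fully effective vaccine:

      def herd_immunity_threshold(R0):
          # Critical vaccination coverage 1 - 1/R0 under homogeneous mixing
          # and a fully effective vaccine.
          return 1.0 - 1.0 / R0

      print(herd_immunity_threshold(7))   # ~0.857, i.e. coverage above ~85%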

  10. New Assumptions to Guide SETI Research

    Science.gov (United States)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  11. Road Impedance Model Study under the Control of Intersection Signal

    Directory of Open Access Journals (Sweden)

    Yunlin Luo

    2015-01-01

    Full Text Available The road traffic impedance model is a difficult and critical point in urban traffic assignment and route guidance. The paper takes a signalized intersection as the research object. On the basis of traditional traffic wave theory, including the implementation of the traffic wave model and the analysis of vehicles' gathering and dissipating, the road traffic impedance model is developed by determining the basic travel time and the waiting delay time. Numerical example results show that the proposed model achieves better calculation performance than the existing model, especially in flat (off-peak) hours. The values of the mean absolute percentage error (MAPE) and the mean absolute deviation (MAD) are reduced by 3.78% and 2.62 s, respectively. This shows that the proposed model is feasible and applicable for modeling road traffic impedance under intersection signal control.
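
    The reported error reductions refer to the usual definitions of MAPE and MAD; a small sketch with hypothetical travel-time data:

      import numpy as np

      def mape(actual, predicted):
          # Mean absolute percentage error, in percent.
          a, p = np.asarray(actual, float), np.asarray(predicted, float)
          return 100.0 * np.mean(np.abs((a - p) / a))

      def mad(actual, predicted):
          # Mean absolute deviation (here in seconds of travel time).
          a, p = np.asarray(actual, float), np.asarray(predicted, float)
          return np.mean(np.abs(a - p))

      observed  = [62.0, 55.3, 70.1, 48.9]   # observed travel times (s), hypothetical
      predicted = [60.2, 57.0, 68.4, 50.1]   # model estimates (s), hypothetical
      print(mape(observed, predicted), mad(observed, predicted))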

  12. Approximation Algorithms for the Highway Problem under the Coupon Model

    Science.gov (United States)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to set the prices of items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has a production cost di and each customer ej ∈ E has a valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item so as to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can obtain more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (this is called an [s, l]-valuation setting) or all valuations are identical (this is called a single valuation setting).
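
    A minimal sketch of evaluating one candidate price vector in this setting (single-minded customers who buy their bundle only if its total price is at most their valuation); the instance below is hypothetical and illustrates loss-leader pricing:

      def total_profit(prices, costs, customers):
          # Profit of a candidate price vector for single-minded customers:
          # a customer buys the whole bundle iff its total price does not exceed
          # the customer's valuation; per-item profit is r_i - d_i, and negative
          # per-item profits ("loss leaders") are allowed.
          profit = 0.0
          for bundle, valuation in customers:
              if sum(prices[i] for i in bundle) <= valuation:
                  profit += sum(prices[i] - costs[i] for i in bundle)
          return profit

      # Hypothetical line-highway instance: items 0..3, interval customers
      costs = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
      prices = {0: 0.5, 1: 3.0, 2: 3.0, 3: 0.5}   # items 0 and 3 priced below cost
      customers = [((0, 1), 4.0), ((1, 2), 6.5), ((2, 3), 4.0)]
      print(total_profit(prices, costs, customers))   # 7.0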

  13. A Dual System Model of Preferences under Risk

    Science.gov (United States)

    Mukherjee, Kanchan

    2010-01-01

    This article presents a dual system model (DSM) of decision making under risk and uncertainty according to which the value of a gamble is a combination of the values assigned to it independently by the affective and deliberative systems. On the basis of research on dual process theories and empirical research in Hsee and Rottenstreich (2004) and…

  14. Line and lattice networks under deterministic interference models

    NARCIS (Netherlands)

    Goseling, Jasper; Gastpar, Michael; Weber, Jos H.

    Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of

  15. Quantifying credit portfolio losses under multi-factor models

    NARCIS (Netherlands)

    G. Colldeforns-Papiol (Gemma); L. Ortiz Gracia (Luis); C.W. Oosterlee (Kees)

    2018-01-01

    In this work, we investigate the challenging problem of estimating credit risk measures of portfolios with exposure concentration under the multi-factor Gaussian and multi-factor t-copula models. It is well-known that Monte Carlo (MC) methods are highly demanding from the computational

  16. Stochastic Online Learning in Dynamic Networks under Unknown Models

    Science.gov (United States)

    2016-08-02

    The key is to develop online learning strategies at each individual node. Specifically, through local information exchange with its neighbors, each... infinitely repeated game with incomplete information and developed a dynamic pricing strategy referred to as Competitive and Cooperative Demand Learning... This research aims to develop fundamental theories and practical algorithms for

  17. UNDERGRADUATE RESEARCH: An alternative model of doing ...

    Indian Academy of Sciences (India)

    UNDERGRADUATE RESEARCH: An alternative model of doing science. The main work force is undergraduate students. Using research as a tool in education. Advantages: high risk tolerance, infinite energy, uninhibited lateral thinking. Problems: Japanese ...

  18. A flexible model for actuarial risks under dependence

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.; Lukocius, V.

    Methods for computing risk measures, such as stop-loss premiums, tacitly assume independence of the underlying individual risks. This can lead to huge errors even when only small dependencies occur. In the present paper, a general model is developed which covers what happens in practice in a

  19. A CHF Model in Narrow Gaps under Saturated Boiling

    International Nuclear Information System (INIS)

    Park, Suki; Kim, Hyeonil; Park, Cheol

    2014-01-01

    Many researchers have paid great attention to the CHF in narrow gaps due to its numerous industrial applications. In particular, a great number of studies on the CHF have been carried out in relation to nuclear safety issues such as in-vessel retention for nuclear power plants during a severe accident. Analytical studies to predict the CHF in narrow gaps have also been reported. Yu et al. (2012) developed an analytical model to predict the CHF on downward-facing and inclined heaters based on the model of Kandlikar et al. (2001) for an upward-facing heater. A new theoretical model is developed to predict the CHF in narrow gaps under saturated pool boiling. This model is applicable when one or both sides of the coolant channel are heated, and includes the effects of heater orientation. The present model is compared with the experimental CHF data obtained in narrow gaps. A new analytical CHF model is proposed to predict the CHF for narrow gaps under saturated pool boiling. This model can be applied to one-side or two-side heated surfaces and also considers the effects of heater orientation on the CHF. The present model is compared with the experimental data obtained in narrow gaps with one heater. The comparisons indicate that the present model shows good agreement with the experimental CHF data in the horizontal annular tubes. However, it generally under-predicts the experimental data in the narrow rectangular gaps, except for the data obtained with a gap thickness of 10 mm and a horizontal downward-facing heater.

  20. Switching performance of OBS network model under prefetched real traffic

    Science.gov (United States)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to evaluate the performance of the OBS network model precisely. The performance of the OBS network model varies under different conditions, but the most important question is how it works under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and the performance of the OBS network is evaluated on that basis. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. It is therefore reasonable to conclude that this model is closer to a real OBS system than the traditional ones. The simulation results also indicate that this model evaluates the performance of the OBS network system more accurately and that its results are closer to the actual situation.

  1. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  2. Verification of the karst flow model under laboratory controlled conditions

    Science.gov (United States)

    Gotovac, Hrvoje; Andric, Ivo; Malenica, Luka; Srzic, Veljko

    2016-04-01

    Karst aquifers are very important groundwater resources around the world, as well as in the coastal part of Croatia. They consist of an extremely complex structure defined by a slow, laminar porous medium and small fissures, together with usually fast, turbulent conduits/karst channels. Apart from simple lumped hydrological models that ignore the high karst heterogeneity, full hydraulic (distributive) models have been developed using conventional finite element and finite volume methods, considering the complete karst heterogeneity structure, which improves our understanding of complex processes in karst. Groundwater flow modeling in complex karst aquifers faces many difficulties, such as a lack of knowledge of the heterogeneity (especially the conduits), the resolution of different spatial/temporal scales, the connectivity between matrix and conduits, the setting of appropriate boundary conditions, and many others. A particular problem of karst flow modeling is the verification of distributive models under real aquifer conditions, due to the lack of the above-mentioned information. Therefore, we show here the possibility of verifying karst flow models under laboratory-controlled conditions. A special 3-D karst flow model (5.6*2.6*2 m) consists of a concrete construction, a rainfall platform, 74 piezometers, 2 reservoirs and other supply equipment. The model is filled with fine sand (3-D porous matrix) and drainage plastic pipes (1-D conduits). This model provides knowledge of the full heterogeneity structure, including the positions of the different sand layers as well as the conduit locations and geometry. Moreover, we know the geometry of the conduit perforations, which enables analysis of the interaction between matrix and conduits. In addition, the pressure and precipitation distributions and the discharge flow rates from both phases can be measured very accurately. These possibilities are not available at real sites, which makes this model much more useful for karst flow modeling. Many experiments were performed under different controlled conditions such as different

  3. A model for scheduling projects under the condition of inflation and under penalty and reward arrangements

    Directory of Open Access Journals (Sweden)

    J.K. Jolayemi

    2014-01-01

    Full Text Available A zero-one mixed integer linear programming model is developed for the scheduling of projects under conditions of inflation and under penalty and reward arrangements. The effects of inflation on time-cost trade-off curves are illustrated and a modified approach to time-cost trade-off analysis is presented. Numerical examples are given to illustrate the model and its properties. The examples show that misleading schedules and inaccurate project-cost estimates will be produced if the inflation factor is neglected in an environment of high inflation. They also show that the award of a penalty or bonus is a catalyst for early completion of a project, just as would be expected.
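
    The effect of inflation on activity costs in such time-cost trade-off analyses can be illustrated by escalating a base cost with a constant period-wise inflation rate; the rate and figures below are hypothetical:

      def inflated_cost(base_cost, period, inflation_rate=0.15):
          # Nominal cost of an activity paid in the given period under constant
          # inflation; period 0 corresponds to current prices.
          return base_cost * (1.0 + inflation_rate) ** period

      # The same crash cost of 100 looks very different paid in period 0 vs period 4
      print(inflated_cost(100.0, 0), inflated_cost(100.0, 4))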

  4. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Directory of Open Access Journals (Sweden)

    Steven T Piantadosi

    2015-04-01

    Full Text Available Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.

  5. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
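
    A toy illustration of the contrast drawn above, comparing an additive utility-stage rule with a dimensional-prioritization heuristic on a two-option choice; the options, weights and priority order are hypothetical:

      def utility_choice(option_a, option_b, weights):
          # Utility-stage rule: pick the option with the larger weighted sum.
          u = lambda opt: sum(w * x for w, x in zip(weights, opt))
          return "A" if u(option_a) >= u(option_b) else "B"

      def prioritization_choice(option_a, option_b, priority, threshold=0.0):
          # Heuristic rule: compare dimensions in priority order and decide on
          # the first dimension whose difference exceeds the threshold.
          for d in priority:
              diff = option_a[d] - option_b[d]
              if abs(diff) > threshold:
                  return "A" if diff > 0 else "B"
          return "A"   # tie-break

      # Hypothetical two-attribute options (e.g. probability, payoff), rescaled
      a, b = (0.9, 0.3), (0.4, 0.8)
      print(utility_choice(a, b, weights=(0.7, 0.3)))      # utility-stage model
      print(prioritization_choice(a, b, priority=(0, 1)))  # heuristic, no utility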

  6. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    Science.gov (United States)

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
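
    For readers who want the operational definitions, a short sketch of Shannon entropy and of the mutual information ("Shannon differentiation") between subpopulations computed from an allele-count table; the counts below are hypothetical:

      import numpy as np

      def shannon_entropy(freqs):
          # H = -sum p log p over the allele-frequency vector (natural log).
          p = np.asarray(freqs, dtype=float)
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())

      def shannon_differentiation(counts):
          # Mutual information between subpopulation identity and allele identity,
          # computed from a (subpopulation x allele) table of allele counts.
          joint = np.asarray(counts, dtype=float)
          joint = joint / joint.sum()
          h_pooled = shannon_entropy(joint.sum(axis=0))          # pooled alleles
          h_within = sum(row.sum() * shannon_entropy(row / row.sum())
                         for row in joint)                        # within-subpop
          return h_pooled - h_within

      # Hypothetical allele counts at one locus in two subpopulations (rows)
      counts = [[40, 10, 0],
                [5, 30, 15]]
      print(shannon_differentiation(counts))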

  7. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    Directory of Open Access Journals (Sweden)

    Anne Chao

    Full Text Available Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.

  8. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models for solving complex groundwater management problems. Artificial intelligence-based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented under the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the applicability of such approximation surrogates becomes limited. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are considered as uncertain values. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of

  9. Misleading prioritizations from modelling range shifts under climate change

    Science.gov (United States)

    Sofaer, Helen R.; Jarnevich, Catherine S.; Flather, Curtis H.

    2018-01-01

    Aim: Conservation planning requires the prioritization of a subset of taxa and geographical locations to focus monitoring and management efforts. Integration of the threats and opportunities posed by climate change often relies on predictions from species distribution models, particularly for assessments of vulnerability or invasion risk for multiple taxa. We evaluated whether species distribution models could reliably rank changes in species range size under climate and land use change. Location: Conterminous U.S.A. Time period: 1977–2014. Major taxa studied: Passerine birds. Methods: We estimated ensembles of species distribution models based on historical North American Breeding Bird Survey occurrences for 190 songbirds, and generated predictions to recent years given c. 35 years of observed land use and climate change. We evaluated model predictions using standard metrics of discrimination performance and a more detailed assessment of the ability of models to rank species vulnerability to climate change based on predicted range loss, range gain, and overall change in range size. Results: Species distribution models yielded unreliable and misleading assessments of relative vulnerability to climate and land use change. Models could not accurately predict range expansion or contraction, and therefore failed to anticipate patterns of range change among species. These failures occurred despite excellent overall discrimination ability and transferability to the validation time period, which reflected strong performance at the majority of locations that were either always or never occupied by each species. Main conclusions: Models failed for the questions and at the locations of greatest interest to conservation and management. This highlights potential pitfalls of multi-taxa impact assessments under global change; in our case, models provided misleading rankings of the most impacted species, and spatial information about range changes was not credible. As modelling methods and

  10. A constitutive framework for modelling thin incompressible viscoelastic materials under plane stress in the finite strain regime

    Science.gov (United States)

    Kroon, M.

    2011-11-01

    Rubbers and soft biological tissues may undergo large deformations and are also viscoelastic. The formulation of constitutive models for these materials poses special challenges. In several applications, especially in biomechanics, these materials are also relatively thin, implying that in-plane stresses dominate and that plane stress may therefore be assumed. In the present paper, a constitutive model for viscoelastic materials in the finite strain regime and under the assumption of plane stress is proposed. It is assumed that the relaxation behaviour in the direction of plane stress can be treated separately, which makes it possible to formulate evolution laws for the plastic strains in explicit form while incompressibility is fulfilled. Experimental results from biomechanics (dynamic inflation of dog aorta) and rubber mechanics (biaxial stretching of rubber sheets) were used to assess the proposed model. The assessment clearly indicates that the model is fully able to predict the experimental outcome for these types of material.

  11. Assumptions and Challenges of Open Scholarship

    Directory of Open Access Journals (Sweden)

    George Veletsianos

    2012-10-01

    Full Text Available Researchers, educators, policymakers, and other education stakeholders hope and anticipate that openness and open scholarship will generate positive outcomes for education and scholarship. Given the emerging nature of open practices, educators and scholars are finding themselves in a position in which they can shape and/or be shaped by openness. The intention of this paper is (a) to identify the assumptions of the open scholarship movement and (b) to highlight challenges associated with the movement's aspirations of broadening access to education and knowledge. Through a critique of technology use in education, an understanding of educational technology narratives and their unfulfilled potential, and an appreciation of the negotiated implementation of technology use, we hope that this paper helps spark a conversation for a more critical, equitable, and effective future for education and open scholarship.

  12. Challenging the assumptions for thermal sensation scales

    DEFF Research Database (Denmark)

    Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne

    2016-01-01

    Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used...... extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its...... assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non...

  13. The incompressibility assumption in computational simulations of nasal airflow.

    Science.gov (United States)

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

    Most of the computational work on nasal airflow to date has assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulations for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below approximately [Formula: see text]C. Therefore, density variations should be considered for simulations at such low temperatures.
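
    The size of the density effect can be gauged from the ideal-gas law; a minimal check (standard constants, hypothetical temperatures):

      def air_density(T_celsius, p=101_325.0, R=287.05):
          # Dry-air density from the ideal-gas law, in kg/m^3.
          return p / (R * (T_celsius + 273.15))

      # Density contrast between cold inhaled air and air near body temperature
      print(air_density(-10.0), air_density(37.0))   # roughly 1.34 vs 1.14 kg/m^3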

  14. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    Science.gov (United States)

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.

  15. A dual system model of preferences under risk.

    Science.gov (United States)

    Mukherjee, Kanchan

    2010-01-01

    This article presents a dual system model (DSM) of decision making under risk and uncertainty according to which the value of a gamble is a combination of the values assigned to it independently by the affective and deliberative systems. On the basis of research on dual process theories and empirical research in Hsee and Rottenstreich (2004) and Rottenstreich and Hsee (2001) among others, the DSM incorporates (a) individual differences in disposition to rational versus emotional decision making, (b) the affective nature of outcomes, and (c) different task construals within its framework. The model has good descriptive validity and accounts for (a) violation of nontransparent stochastic dominance, (b) fourfold pattern of risk attitudes, (c) ambiguity aversion, (d) common consequence effect, (e) common ratio effect, (f) isolation effect, and (g) coalescing and event-splitting effects. The DSM is also used to make several novel predictions of conditions under which specific behavior patterns may or may not occur.
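
    As a loose, hypothetical sketch only (not the paper's exact functional forms), the combination rule can be written as a convex mixture of a deliberative expected-value component and an affective component that is insensitive to probabilities and magnitudes:

      def dsm_value(outcomes, probabilities, gamma=0.5):
          # Convex combination of a deliberative (expected-value) component and a
          # crude affective component that ignores probabilities and magnitudes.
          # gamma is the weight on the deliberative system.
          deliberative = sum(p * x for p, x in zip(probabilities, outcomes))
          affective = sum((x > 0) - (x < 0) for x in outcomes) / len(outcomes)
          return gamma * deliberative + (1 - gamma) * affective

      # A 10% chance of winning 100, otherwise nothing (hypothetical gamble)
      print(dsm_value([100, 0], [0.1, 0.9], gamma=0.8))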

  16. Mathematical modelling of unglazed solar collectors under extreme operating conditions

    DEFF Research Database (Denmark)

    Bunea, M.; Perers, Bengt; Eicher, S.

    2015-01-01

    average temperature levels at the evaporator. Simulation of these systems requires a collector model that can take into account operation at very low temperatures (below freezing) and under various weather conditions, particularly operation without solar irradiation.A solar collector mathematical model......Combined heat pumps and solar collectors got a renewed interest on the heating system market worldwide. Connected to the heat pump evaporator, unglazed solar collectors can considerably increase their efficiency, but they also raise the coefficient of performance of the heat pump with higher...... was found due to the condensation phenomenon and up to 40% due to frost under no solar irradiation. This work also points out the influence of the operating conditions on the collector's characteristics.Based on experiments carried out at a test facility, every heat flux on the absorber was separately...

  17. Grain breakage under uniaxial compression, through 3D DEM modelling

    Directory of Open Access Journals (Sweden)

    Nader François

    2017-01-01

    Full Text Available A breakable grain model is presented, using the concept of a particle assembly. Grains of polyhedral shape are generated, formed by joining together tetrahedral subgrains using cohesive bonds. Single-grain crushing simulations are performed for multiple values of the intra-granular cohesion to study its effect on the grain's strength. The same effect of intra-granular cohesion is studied under oedometric compression on samples of around 800 grains, which allows evaluation of the grain breakage model at the macroscopic level. Grain size distribution curves and grain breakage ratios are monitored throughout the simulations.

  18. A sliding mode observer for hemodynamic characterization under modeling uncertainties

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    This paper addresses the case of physiological state reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial knowledge of the so-called balloon model describing the hemodynamic behavior of the brain. To overcome this difficulty, a High Order Sliding Mode observer is applied to the balloon system, where the unknown coupling is considered as an internal perturbation. The effectiveness of the proposed method is illustrated through a set of synthetic data that mimic fMRI experiments.

  19. Model analyses for sustainable energy supply under CO2 restrictions

    International Nuclear Information System (INIS)

    Matsuhashi, Ryuji; Ishitani, Hisashi.

    1995-01-01

    This paper aims at clarifying key points for realizing a sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for the sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand in the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results show that the present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term, given appropriate incentives.

  20. Consequences of the genetic threshold model for observing partial migration under climate change scenarios.

    Science.gov (United States)

    Cobben, Marleen M P; van Noordwijk, Arie J

    2017-10-01

    Migration is a widespread phenomenon across the animal kingdom as a response to seasonality in environmental conditions. Partially migratory populations are populations that consist of both migratory and residential individuals. Such populations are very common, yet their stability has long been debated. The inheritance of migratory activity is currently best described by the threshold model of quantitative genetics. The inclusion of such a genetic threshold model for migratory behavior leads to a stable zone in time and space of partially migratory populations under a wide range of demographic parameter values, when assuming stable environmental conditions and unlimited genetic diversity. Migratory species are expected to be particularly sensitive to global warming, as arrival at the breeding grounds might be increasingly mistimed as a result of the uncoupling of long-used cues and actual environmental conditions, with decreasing reproduction as a consequence. Here, we investigate the consequences for migratory behavior and the stability of partially migratory populations under five climate change scenarios and the assumption of a genetic threshold value for migratory behavior in an individual-based model. The results show a spatially and temporally stable zone of partially migratory populations after different lengths of time in all scenarios. In the scenarios in which the species expands its range from a particular set of starting populations, the genetic diversity and location at initialization determine the species' colonization speed across the zone of partial migration and therefore across the entire landscape. Abruptly changing environmental conditions after model initialization never caused a qualitative change in phenotype distributions, or complete extinction. This suggests that climate change-induced shifts in species' ranges as well as changes in survival probabilities and reproductive success can be met with flexibility in migratory behavior at the
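
    A minimal sketch of the quantitative-genetic threshold model named above: an individual migrates if its normally distributed liability exceeds a threshold; the parameter values are hypothetical:

      import numpy as np

      rng = np.random.default_rng(1)

      def is_migrant(liability, threshold=0.0):
          # Threshold model: an individual migrates iff its underlying liability
          # (polygenic plus environmental value) exceeds the threshold.
          return liability > threshold

      # Hypothetical population with liabilities ~ Normal(mean 0.2, sd 1.0)
      liabilities = rng.normal(loc=0.2, scale=1.0, size=10_000)
      print(f"fraction migratory: {is_migrant(liabilities).mean():.2f}")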

  1. The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model

    Science.gov (United States)

    Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan

    2016-05-01

    Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
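
    A minimal sketch of the controlled dynamics helps make the setting concrete: below, S_i, I_i and R_i are the probabilities that node i of a random network is susceptible, infected or recovered, and a constant per-node immunization rate u moves susceptibles to the recovered class. The network, rates and constant control are illustrative assumptions; the paper instead derives a time-varying optimal control from the optimality system.

```python
# Node-based SIRS sketch: S_i, I_i, R_i are per-node state probabilities,
# beta the per-contact infection rate, gamma the cure rate, delta the rate of
# immunity loss, and u a (here constant) immunization control moving
# susceptibles to the recovered class. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 20
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric adjacency, no self-loops

beta, gamma, delta = 0.08, 0.4, 0.1
u = np.full(N, 0.05)                            # constant per-node immunization rate

S, I, R = np.full(N, 0.95), np.full(N, 0.05), np.zeros(N)
dt, T = 0.01, 50.0
for _ in range(int(T / dt)):
    force = beta * (A @ I)                      # infection pressure on each node
    dS = delta * R - force * S - u * S
    dI = force * S - gamma * I
    dR = gamma * I + u * S - delta * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

print("mean per-node infection probability at time T:", round(I.mean(), 4))
```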

  2. Climate Change: Implications for the Assumptions, Goals and Methods of Urban Environmental Planning

    Directory of Open Access Journals (Sweden)

    Kristina Hill

    2016-12-01

    -based quantitative models of regional system behavior that may soon be used to determine acceptable land uses. Finally, the philosophical assumptions that underlie urban environmental planning are changing to address new epistemological, ontological and ethical assumptions that support new methods and goals. The inability to use the past as a guide to the future, new prioritizations of values for adaptation, and renewed efforts to focus on intergenerational justice are provided as examples. In order to represent a genuine paradigm shift, this review argues that changes must begin to be evident across the underlying assumptions, conceptual frameworks, and methods of urban environmental planning, and be attributable to the same root cause. The examples presented here represent the early stages of a change in the overall paradigm of the discipline.

  3. On the renewal risk model under a threshold strategy

    Science.gov (United States)

    Dong, Yinghui; Wang, Guojing; Yuen, Kam C.

    2009-08-01

    In this paper, we consider the renewal risk process under a threshold dividend payment strategy. For this model, the expected discounted dividend payments and the Gerber-Shiu expected discounted penalty function are investigated. Integral equations, integro-differential equations and some closed form expressions for them are derived. When the claims are exponentially distributed, it is verified that the expected penalty of the deficit at ruin is proportional to the ruin probability.
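
    The threshold dividend strategy can be illustrated with a small Monte Carlo sketch: premiums accrue at rate c, claims arrive at Erlang-distributed renewal epochs with exponential sizes, and dividends are paid at rate a whenever the surplus exceeds the threshold b. All parameter values are illustrative assumptions used only to estimate the ruin probability and discounted dividends by simulation, not the closed-form results of the paper.

```python
# Monte Carlo sketch of a renewal risk process with a threshold dividend
# strategy: surplus grows at rate c (c - a above the threshold b), claims are
# exponential and arrive at Erlang(2) renewal epochs. Parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
u0, c, a, b = 5.0, 1.5, 0.5, 8.0          # initial surplus, premium, dividend rate, threshold
claim_mean, horizon, delta = 1.0, 100.0, 0.03
dt, n_paths = 0.02, 500

ruined, disc_dividends = 0, 0.0
for _ in range(n_paths):
    U, t, pv = u0, 0.0, 0.0
    next_claim = rng.gamma(2, 1.0)        # Erlang(2) inter-claim time
    while t < horizon:
        if U > b:
            pv += np.exp(-delta * t) * a * dt   # discounted dividend stream
            U += (c - a) * dt
        else:
            U += c * dt
        t += dt
        if t >= next_claim:
            U -= rng.exponential(claim_mean)
            next_claim = t + rng.gamma(2, 1.0)
        if U < 0:
            ruined += 1
            break
    disc_dividends += pv

print("estimated ruin probability :", ruined / n_paths)
print("mean discounted dividends  :", disc_dividends / n_paths)
```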

  4. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Science.gov (United States)

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  5. Modeling human behaviors and reactions under dangerous environment.

    Science.gov (United States)

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification on the conscious/subconscious behaviors and reactions of different people; capturing different motion postures by the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling character's perceptions, modeling character's decision making, modeling character's movements, modeling character's interaction with environment and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories, the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas to integrate perception and intelligence into virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence, the accurate modeling of human's vision, smell, touch and hearing, the diversity and effects of emotion and personality in decision making. There are three types of software platforms which could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.

  6. The transferability of hydrological models under nonstationary climatic conditions

    Directory of Open Access Journals (Sweden)

    C. Z. Li

    2012-04-01

    Full Text Available This paper investigates issues involved in calibrating hydrological models against observed data when the aim of the modelling is to predict future runoff under different climatic conditions. To achieve this objective, we tested two hydrological models, DWBM and SIMHYD, using data from 30 unimpaired catchments in Australia which had at least 60 yr of daily precipitation, potential evapotranspiration (PET) and streamflow data. Nash-Sutcliffe efficiency (NSE), modified index of agreement (d1) and water balance error (WBE) were used as performance criteria. We used a differential split-sample test to split up the data into 120 sub-periods and 4 different climatic sub-periods in order to assess how well the calibrated model could be transferred to different periods. For each catchment, the models were calibrated for one sub-period and validated on the other three. Monte Carlo simulation was used to explore parameter stability compared to historic climatic variability. The chi-square test was used to measure the relationship between the distribution of the parameters and hydroclimatic variability. The results showed that the performance of the two hydrological models differed and depended on the model calibration. We found that if a hydrological model is set up to simulate runoff for a wet climate scenario then it should be calibrated on a wet segment of the historic record, and similarly a dry segment should be used for a dry climate scenario. The Monte Carlo simulation provides an effective and pragmatic approach to explore uncertainty and equifinality in hydrological model parameters. Some parameters of the hydrological models are shown to be significantly more sensitive to the choice of calibration periods. Our findings support the idea that when using conceptual hydrological models to assess future climate change impacts, a differential split-sample test and Monte Carlo simulation should be used to quantify uncertainties due to
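
    The performance criteria and the split-sample logic can be sketched in a few lines: Nash-Sutcliffe efficiency and water balance error are computed for a model calibrated on a dry sub-period and validated on a wet one. The synthetic data and the one-parameter "model" below are placeholders (illustrative assumptions); a real rainfall-runoff model such as SIMHYD or DWBM would be evaluated the same way.

```python
# Differential split-sample sketch: calibrate on dry years, validate on wet
# years, and report NSE and water balance error. Data and the one-parameter
# runoff model are synthetic placeholders (illustrative assumptions).
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def water_balance_error(obs, sim):
    """Relative bias of total simulated volume, as a percentage."""
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=4000)
obs_q = 0.4 * rain + rng.normal(0, 0.3, size=4000)     # synthetic "observed" runoff

# split the record into dry and wet sub-periods by "annual" rainfall rank
years = rain.reshape(-1, 100)                          # pretend each row is one year
order = np.argsort(years.sum(axis=1))
dry_idx, wet_idx = order[:20], order[-20:]

# "calibrate" a one-parameter runoff coefficient on the dry years...
k = obs_q.reshape(-1, 100)[dry_idx].sum() / years[dry_idx].sum()
# ...and validate it on the wet years (differential split-sample test)
sim_wet = k * years[wet_idx].ravel()
obs_wet = obs_q.reshape(-1, 100)[wet_idx].ravel()
print(f"NSE on wet validation period: {nse(obs_wet, sim_wet):.2f}")
print(f"WBE on wet validation period: {water_balance_error(obs_wet, sim_wet):.1f}%")
```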

  7. Modelling climate impact on floods under future emission scenarios using an ensemble of climate model projections

    Science.gov (United States)

    Wetterhall, F.; Cloke, H. L.; He, Y.; Freer, J.; Pappenberger, F.

    2012-04-01

    Evidence provided by modelled assessments of climate change impact on flooding is fundamental to water resource and flood risk decision making. Impact models usually rely on climate projections from Global and Regional Climate Models, and there is no doubt that these provide a useful assessment of future climate change. However, cascading ensembles of climate projections into impact models is not straightforward because of problems of coarse resolution in Global and Regional Climate Models (GCM/RCM) and the deficiencies in modelling high-intensity precipitation events. Thus decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs, such as selection of downscaling methods and application of Model Output Statistics (MOS). In this paper a grand ensemble of projections from several GCM/RCMs is used to drive a hydrological model and analyse the resulting future flood projections for the Upper Severn, UK. The impact and implications of applying MOS techniques to precipitation, as well as hydrological model parameter uncertainty, are taken into account. The resultant grand ensemble of future river discharge projections from the RCM/GCM-hydrological model chain is evaluated against a response surface technique combined with a perturbed physics experiment creating a probabilistic ensemble of climate model outputs. The ensemble distribution of results shows that future risk of flooding in the Upper Severn increases compared to present conditions; however, the study highlights that the uncertainties are large and that strong assumptions were made in using Model Output Statistics to produce the estimates of future discharge. The importance of analysing on a seasonal basis rather than just annually is highlighted. The inability of the RCMs (and GCMs) to produce realistic precipitation patterns, even under present conditions, is a major caveat of local climate impact studies on flooding, and this should be a focus for future development.
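
    One common Model Output Statistics step referred to above is bias correction of climate-model precipitation before it drives the hydrological model; a minimal sketch of empirical quantile mapping is given below. The synthetic observed and modelled series are illustrative assumptions, and the paper's point is precisely that such MOS choices strongly condition the resulting flood estimates.

```python
# Empirical quantile mapping: replace each future model value by the observed
# value at the same quantile of the historical model climate. Synthetic data
# stand in for observations and RCM output (illustrative assumptions).
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
    """Map future model values through the empirical obs/model quantile relation."""
    q = np.linspace(0, 100, n_quantiles)
    model_q = np.percentile(model_hist, q)
    obs_q = np.percentile(obs_hist, q)
    return np.interp(model_future, model_q, obs_q)

rng = np.random.default_rng(7)
obs_hist = rng.gamma(2.0, 3.0, 5000)            # observed daily precipitation
model_hist = rng.gamma(2.0, 2.2, 5000)          # biased (too dry) RCM output, same period
model_future = rng.gamma(2.0, 2.6, 5000)        # RCM projection for a future period

corrected = quantile_map(model_hist, obs_hist, model_future)
print("raw future mean      :", model_future.mean().round(2))
print("corrected future mean:", corrected.mean().round(2))
```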

  8. Agricultural livelihoods in coastal Bangladesh under climate and environmental change--a model framework.

    Science.gov (United States)

    Lázár, Attila N; Clarke, Derek; Adams, Helen; Akanda, Abdur Razzaque; Szabo, Sylvia; Nicholls, Robert J; Matthews, Zoe; Begum, Dilruba; Saleh, Abul Fazal M; Abedin, Md Anwarul; Payo, Andres; Streatfield, Peter Kim; Hutton, Craig; Mondal, M Shahjahan; Moslehuddin, Abu Zofar Md

    2015-06-01

    Coastal Bangladesh experiences significant poverty and hazards today and is highly vulnerable to climate and environmental change over the coming decades. Coastal stakeholders are demanding information to assist in the decision making processes, including simulation models to explore how different interventions, under different plausible future socio-economic and environmental scenarios, could alleviate environmental risks and promote development. Many existing simulation models neglect the complex interdependencies between the socio-economic and environmental system of coastal Bangladesh. Here an integrated approach has been proposed to develop a simulation model to support agriculture and poverty-based analysis and decision-making in coastal Bangladesh. In particular, we show how a simulation model of farmer's livelihoods at the household level can be achieved. An extended version of the FAO's CROPWAT agriculture model has been integrated with a downscaled regional demography model to simulate net agriculture profit. This is used together with a household income-expenses balance and a loans logical tree to simulate the evolution of food security indicators and poverty levels. Modelling identifies salinity and temperature stress as limiting factors to crop productivity and fertilisation due to atmospheric carbon dioxide concentrations as a reinforcing factor. The crop simulation results compare well with expected outcomes but also reveal some unexpected behaviours. For example, under current model assumptions, temperature is more important than salinity for crop production. The agriculture-based livelihood and poverty simulations highlight the critical significance of debt through informal and formal loans set at such levels as to persistently undermine the well-being of agriculture-dependent households. Simulations also indicate that progressive approaches to agriculture (i.e. diversification) might not provide the clear economic benefit from the perspective of

  9. Rooting phylogenetic trees under the coalescent model using site pattern probabilities.

    Science.gov (United States)

    Tian, Yuan; Kubatko, Laura

    2017-12-19

    Phylogenetic tree inference is a fundamental tool to estimate ancestor-descendant relationships among different species. In phylogenetic studies, identification of the root - the most recent common ancestor of all sampled organisms - is essential for a complete understanding of the evolutionary relationships. Rooted trees benefit most downstream applications of phylogenies, such as species classification or the study of adaptation. Often, trees can be rooted by using outgroups, which are species that are known to be more distantly related to the sampled organisms than any other species in the phylogeny. However, outgroups are not always available in evolutionary research. In this study, we develop a new method for rooting species trees under the coalescent model, by developing a series of hypothesis tests for rooting quartet phylogenies using site pattern probabilities. The power of this method is examined by simulation studies and by application to an empirical North American rattlesnake data set. The method shows high accuracy across the simulation conditions considered, and performs well for the rattlesnake data. Thus, it provides a computationally efficient way to accurately root species-level phylogenies that incorporates the coalescent process. The method is robust to variation in substitution model, but is sensitive to the assumption of a molecular clock. Our study establishes a computationally practical method for rooting species trees that is more efficient than traditional methods. The method will benefit numerous evolutionary studies that require rooting a phylogenetic tree without having to specify outgroups.

  10. Electricity pricing model in thermal generating stations under deregulation

    International Nuclear Information System (INIS)

    Reji, P.; Ashok, S.; Moideenkutty, K.M.

    2007-01-01

    Deregulation has replaced the monopoly of regulated public utilities with competitive power markets. Under the deregulated power market, the electricity price primarily depends on market mechanisms and power demand. In this market, generators generally follow marginal pricing. Each generator fixes the electricity price based on its pricing strategy, which leads to more price volatility. This paper proposed a model to determine the electricity price for a thermal generating station under deregulation, considering all operational constraints of the plant and the economic variables that influence the price. The purpose of the model was to assist existing stations, investors in the power sector, regulatory authorities, transmission utilities, and new power generators in decision-making. The model could accommodate price volatility in the market and was based on performance incentives/penalties considering plant load factor, availability of the plant and peak/off-peak demand. The model was applied as a case study to a typical thermal utility in India to determine the electricity price. It was concluded that the case study of a thermal generating station in a deregulated environment showed that the electricity price mainly depended on the gross calorific value (GCV) of fuel, mode of operation, price of the fuel, and operating charges. 11 refs., 2 tabs., 1 fig

  11. Modeling cascading failures in interdependent infrastructures under terrorist attacks

    International Nuclear Information System (INIS)

    Wu, Baichao; Tang, Aiping; Wu, Jie

    2016-01-01

    An attack strength degradation model has been introduced to further capture the interdependencies among infrastructures and to model cascading failures across infrastructures when terrorist attacks occur. A medium-sized energy system including an oil network and a power network is selected for exploring the vulnerabilities from independent networks to interdependent networks, considering both structural and functional vulnerability. Two types of interdependencies among critical infrastructures are involved in this paper: physical interdependencies and geographical interdependencies, represented by tunable parameters based on the probabilities of failures of nodes in the networks. In this paper, a tolerance parameter α is used to evaluate the overloads of the substations based on power flow redistribution in power transmission systems under attack. The simulation results show that both independent and interdependent networks will collapse when only a small fraction of nodes is attacked under the attack strength degradation model, especially the interdependent networks. The methodology introduced in this paper, with both physical and geographical interdependencies involved, can be applied to further analyze the vulnerability of interdependent infrastructures, and provides insights into the vulnerability of interdependent infrastructures that can inform mitigation actions for critical infrastructure protection. - Highlights: • An attack strength degradation model based on the specified locations has been introduced. • Interdependencies, both physical and geographical, have been analyzed. • The structural vulnerability and the functional vulnerability have been considered.
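
    The role of the tolerance parameter α can be illustrated with a minimal load-redistribution cascade: each node's capacity is (1 + α) times its initial load, an attacked node's load is shared among its surviving neighbours, and overloads propagate iteratively. The random scale-free network and degree-based loads are illustrative assumptions, not the oil/power system studied in the paper.

```python
# Load-redistribution cascade with tolerance parameter alpha: capacity_i =
# (1 + alpha) * initial_load_i, a failed node's load is shared among its
# surviving neighbours, and new overloads fail in the next round.
# The scale-free network and degree-based loads are illustrative assumptions.
import networkx as nx

def cascade(G, load, alpha, attacked):
    capacity = {n: (1.0 + alpha) * load[n] for n in G}
    failed = set(attacked)
    frontier = set(attacked)
    while frontier:
        for n in frontier:                       # redistribute failed nodes' load
            alive = [m for m in G.neighbors(n) if m not in failed]
            if alive:
                share = load[n] / len(alive)
                for m in alive:
                    load[m] += share
        frontier = {m for m in G if m not in failed and load[m] > capacity[m]}
        failed |= frontier
    return len(failed)

G = nx.barabasi_albert_graph(200, 2, seed=3)
base_load = {n: 1.0 + G.degree(n) for n in G}    # degree-based initial load

for alpha in (0.05, 0.2, 0.5):
    n_failed = cascade(G, dict(base_load), alpha, attacked=[0])
    print(f"alpha = {alpha:.2f}: {n_failed} of {G.number_of_nodes()} nodes failed")
```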

  12. Modeling thermophysical properties of food under high pressure.

    Science.gov (United States)

    Otero, L; Guignon, B; Aparicio, C; Sanz, P D

    2010-04-01

    A set of well-known generic models to predict the thermophysical properties of food from its composition at atmospheric conditions was adapted to work at any pressure. The suitability of the models was assessed using data from the literature for four different food products, namely tomato paste, potato, pork, and cod. When the composition of the product considered was not known, an alternative was proposed if some thermal data at atmospheric conditions were available. Since knowledge of the initial freezing point and ice content of food is essential for the correct prediction of its thermal properties, models for obtaining these properties under pressure were also included. Our results showed that good predictions under pressure, accurate enough for most engineering calculations, can be made either from composition data or using known thermal data of the food considered at atmospheric conditions. All the equations and coefficients needed to construct the models are given throughout the text, so readers can compose their own routines. However, these routines can also be downloaded free at http://www.if.csic.es/programas/ifiform.htm as executable programs running in Windows.

  13. A quantitative evaluation of a qualitative risk assessment framework: Examining the assumptions and predictions of the Productivity Susceptibility Analysis (PSA)

    Science.gov (United States)

    2018-01-01

    Qualitative risk assessment frameworks, such as the Productivity Susceptibility Analysis (PSA), have been developed to rapidly evaluate the risks of fishing to marine populations and prioritize management and research among species. Despite being applied to over 1,000 fish populations, and an ongoing debate about the most appropriate method to convert biological and fishery characteristics into an overall measure of risk, the assumptions and predictive capacity of these approaches have not been evaluated. Several interpretations of the PSA were mapped to a conventional age-structured fisheries dynamics model to evaluate the performance of the approach under a range of assumptions regarding exploitation rates and measures of biological risk. The results demonstrate that the underlying assumptions of these qualitative risk-based approaches are inappropriate, and the expected performance is poor for a wide range of conditions. The information required to score a fishery using a PSA-type approach is comparable to that required to populate an operating model and evaluate the population dynamics within a simulation framework. In addition to providing a more credible characterization of complex system dynamics, the operating model approach is transparent, reproducible and can evaluate alternative management strategies over a range of plausible hypotheses for the system. PMID:29856869

  14. Modeling Malaria Vector Distribution under Climate Change Scenarios in Kenya

    Science.gov (United States)

    Ngaina, J. N.

    2017-12-01

    Projecting the distribution of malaria vectors under climate change is essential for planning integrated vector control strategies for sustaining elimination and preventing reintroduction of malaria. However, in Kenya, little knowledge exists on the possible effects of climate change on malaria vectors. Here we assess the potential impact of future climate change on locally dominant Anopheles vectors including Anopheles gambiae, Anopheles arabiensis, Anopheles merus, Anopheles funestus, Anopheles pharoensis and Anopheles nili. Environmental data (climate, land cover and elevation) and primary empirical geo-located species-presence data were identified. The principle of maximum entropy (Maxent) was used to model the species' potential distribution area under paleoclimate, current and future climates. The Maxent model was highly accurate with a statistically significant AUC value. Simulation-based estimates suggest that the environmentally suitable area (ESA) for Anopheles gambiae, An. arabiensis, An. funestus and An. pharoensis would increase under both scenarios for mid-century (2016-2045), but decrease for end of century (2071-2100). An increase in ESA of An. funestus was estimated under the medium stabilizing (RCP4.5) and very heavy (RCP8.5) emission scenarios for mid-century. Our findings can be applied in various ways, such as the identification of additional localities where Anopheles malaria vectors may already exist but have not yet been detected, and the recognition of localities to which they are likely to spread. Moreover, the findings will help guide future sampling location decisions, help with the planning of vector control suites nationally and encourage broader research inquiry into vector species niche modeling.

  15. Incentive salience attribution under reward uncertainty: A Pavlovian model.

    Science.gov (United States)

    Anselme, Patrick

    2015-02-01

    There is a vast literature on the behavioural effects of partial reinforcement in Pavlovian conditioning. Compared with animals receiving continuous reinforcement, partially rewarded animals typically show (a) a slower development of the conditioned response (CR) early in training and (b) a higher asymptotic level of the CR later in training. This phenomenon is known as the partial reinforcement acquisition effect (PRAE). Learning models of Pavlovian conditioning fail to account for it. In accordance with the incentive salience hypothesis, it is here argued that incentive motivation (or 'wanting') plays a more direct role in controlling behaviour than does learning, and reward uncertainty is shown to have an excitatory effect on incentive motivation. The psychological origin of that effect is discussed and a computational model integrating this new interpretation is developed. Many features of CRs under partial reinforcement emerge from this model. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Computational modeling for hexcan failure under core disruptive accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sawada, T.; Ninokata, H.; Shimizu, A. [Tokyo Institute of Technology (Japan)

    1995-09-01

    This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.

  17. Assessment of interfacial heat transfer models under subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Guilherme B.; Braz Filho, Francisco A., E-mail: gbribeiro@ieav.cta.br, E-mail: fbraz@ieav.cta.br [Instituto de Estudos Avançados (DCTA/IEAv), São José dos Campos, SP (Brazil). Div. de Energia Nuclear

    2017-07-01

    The present study concerns a detailed analysis of subcooled flow boiling characteristics under high pressure systems using a two-fluid Eulerian approach provided by a Computational Fluid Dynamics (CFD) solver. For this purpose, a vertical heated pipe made of stainless steel with an internal diameter of 15.4 mm was considered as the modeled domain. A uniform heat flux of 570 kW/m2 and a saturation pressure of 4.5 MPa were applied to the channel wall, whereas a water mass flux of 900 kg/m2s was considered for all simulation cases. The model was validated against a set of experimental data and the results have indicated a promising use of the CFD technique for the estimation of the wall temperature, the liquid bulk temperature and the location of the departure of nucleate boiling. Different sub-models of the interfacial heat transfer coefficient were applied and compared, allowing a better prediction of the void fraction along the heated channel. (author)

  18. Modelling magnetic laminations under arbitrary starting state and flux waveform

    International Nuclear Information System (INIS)

    Bottauscio, Oriano; Chiampi, Mario; Ragusa, Carlo

    2005-01-01

    A numerical model able to predict the behaviour of a magnetic sheet under arbitrary supply conditions has been developed. The electromagnetic field problem is formulated in terms of an electric vector potential, which provides the magnetic field strength evolution. The hysteretic behaviour of the material is represented through the dynamic Preisach model, where the activation law of the bi-state operators is modified in order to guarantee a smooth response. The problem has been solved through a time-stepping procedure using the fixed point technique for handling nonlinearity. The model has been validated by comparison with suitable experiments and it is applied to the investigation of the influence of the material's starting state on the magnetic behaviour

  19. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  20. Modeling the Propagation of Mobile Phone Virus under Complex Network

    Science.gov (United States)

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation of mobile phone viruses is modelled to understand how particular factors can affect their spread and to design effective containment strategies to suppress them. Two different propagation models of mobile phone viruses under a complex network are proposed in this paper. One is intended to describe the propagation of the user-tricking virus, and the other describes the propagation of the vulnerability-exploiting virus. Based on the traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis is conducted to analyze the propagation models. Through analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209

  1. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-18

    Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation that is up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data coming from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model achieves an average root-mean-square error (RMSE) that is at least 9 dB lower. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system addressing various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes below a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.
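
    For a feel for the magnitudes involved, the sketch below uses the well-known Kim visibility model of fog attenuation (not the unified model proposed in the paper, which is fitted to the measurement data described above). The wavelength and link distance are illustrative assumptions.

```python
# Kim visibility model of fog-induced specific attenuation for an FSO link:
# attenuation (dB/km) = (3.91 / V) * (lambda / 550 nm)^(-q), with q depending
# on the visibility V. Wavelength and the 500 m cell length are illustrative
# assumptions, not values from the paper.
def kim_q(visibility_km):
    """Size-distribution exponent of the Kim model as a function of visibility."""
    v = visibility_km
    if v > 50:
        return 1.6
    if v > 6:
        return 1.3
    if v > 1:
        return 0.16 * v + 0.34
    if v > 0.5:
        return v - 0.5
    return 0.0

def fog_attenuation_db_per_km(visibility_km, wavelength_nm=1550):
    """Specific attenuation in dB/km for a given visibility and wavelength."""
    q = kim_q(visibility_km)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

for vis in (1.0, 0.2, 0.05):            # moderate fog to very dense fog
    att = fog_attenuation_db_per_km(vis)
    print(f"visibility {vis*1000:>4.0f} m -> {att:6.1f} dB/km "
          f"({att * 0.5:5.1f} dB over a 500 m cell)")
```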

  2. Towards New Probabilistic Assumptions in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Schumann Andrew

    2015-01-01

    Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe, additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition, and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, many important variables of economic systems are not observable or additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed since George Herbert Mead and Herbert Blumer. In the statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In the paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values). To accommodate the variety of such phenomena, we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.

  3. ψ -ontology result without the Cartesian product assumption

    Science.gov (United States)

    Myrvold, Wayne C.

    2018-05-01

    We introduce a weakening of the preparation independence postulate of Pusey et al. [Nat. Phys. 8, 475 (2012), 10.1038/nphys2309] that does not presuppose that the space of ontic states resulting from a product-state preparation can be represented by the Cartesian product of subsystem state spaces. On the basis of this weakened assumption, it is shown that, in any model that reproduces the quantum probabilities, any pair of pure quantum states |ψ⟩, |ϕ⟩ with |⟨ψ|ϕ⟩| ≤ 1/√2 must be ontologically distinct.

  4. Polarized BRDF for coatings based on three-component assumption

    Science.gov (United States)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflection distribution function) model for coatings is given based on a three-component reflection assumption in order to improve the polarized scattering simulation capability for space objects. In this model, the specular reflection is given based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both multiple reflection and volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, are given to validate that the pBRDF modeling accuracy can be significantly improved by the three-component model given in this paper.

  5. Thermal modelling of PV module performance under high ambient temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Diarra, D.C.; Harrison, S.J. [Queen' s Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering

    2005-07-01

    The actual performance of photovoltaic (PV) generators is typically lower than test results obtained under standard test conditions, because the radiant energy absorbed in the module under normal operation raises the temperature of the cell and other multilayer components. The increase in temperature translates to a lower conversion efficiency of the solar cells. In order to address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards the bottom and the top of the module, and that its temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.

  6. Simulation modelling of a patient surge in an emergency department under disaster conditions

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2015-10-01

    Full Text Available The efficiency of emergency departments (EDs) in handling patient surges during disasters using the available resources is very important. Many EDs require additional resources to overcome bottlenecks in emergency systems. It is assumed that EDs consider, among other options, dispatching temporary staff or even temporarily hiring non-hospital medical staff in order to respond to increased demand. Discrete event simulation (DES), a well-known simulation method based on the idea of process modeling, is used for establishing models related to ED operations and management. In this study, a DES model is developed to investigate and analyze an ED under normal conditions and under a disaster scenario which takes into consideration an increased influx of disaster victims-patients. This will allow early preparedness of emergency departments in terms of physical and human resources. The studied ED is located in an earthquake zone in Istanbul. The report on Istanbul's disaster preparedness presented by the Japan International Cooperation Agency (JICA) and Istanbul Metropolitan Municipality (IMM) asserts that the district where the ED is located is estimated to have the highest injury rate. Based on real case study information, the study aims to suggest a model for pre-planning of ED resources for disasters. The results indicate that in times of a possible disaster, when the percentage of red patient arrivals exceeds 20% of total patient arrivals, the number of red area nurses and the available space for red area patients will be insufficient for the department to operate effectively. As a methodological improvement, a different distribution function was tested for the service times of the treatment areas. The conclusion is that the Weibull distribution function used for the service process of the injection room fits the model better than the Gamma distribution function.
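
    The flavour of such a DES model can be conveyed with a minimal SimPy sketch in which red (critical) patients compete for a small pool of red-area nurses and the disaster scenario is approximated by raising the share of red arrivals above 20%. Arrival and service rates are illustrative assumptions, not the Istanbul case-study data.

```python
# Minimal discrete event simulation of red-area patients in an ED using SimPy:
# patients arrive as a Poisson stream, a fraction RED_SHARE are critical, and
# they queue for a limited pool of red-area nurses. All rates and capacities
# are illustrative assumptions.
import random
import simpy

RANDOM_SEED, SIM_TIME = 42, 24 * 60          # one simulated day, in minutes
RED_NURSES, RED_SHARE = 2, 0.25              # disaster scenario: >20% red arrivals
waits = []

def patient(env, nurses, service_mean):
    arrived = env.now
    with nurses.request() as req:
        yield req
        waits.append(env.now - arrived)      # time spent waiting for a nurse
        yield env.timeout(random.expovariate(1.0 / service_mean))

def arrivals(env, nurses):
    while True:
        yield env.timeout(random.expovariate(1.0 / 5.0))   # a patient every ~5 min
        if random.random() < RED_SHARE:                    # only red patients modelled
            env.process(patient(env, nurses, service_mean=30.0))

random.seed(RANDOM_SEED)
env = simpy.Environment()
red_nurses = simpy.Resource(env, capacity=RED_NURSES)
env.process(arrivals(env, red_nurses))
env.run(until=SIM_TIME)

print(f"red patients treated: {len(waits)}, mean wait {sum(waits)/len(waits):.1f} min")
```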

  7. Modeling Bird Migration under Climate Change: A Mechanistic Approach

    Science.gov (United States)

    Smith, James A.

    2009-01-01

    How will migrating birds respond to changes in the environment under climate change? What are the implications for migratory success under the various accelerated climate change scenarios as forecast by the Intergovernmental Panel on Climate Change? How will reductions or increased variability in the number or quality of wetland stop-over sites affect migratory bird species? The answers to these questions have important ramifications for conservation biology and wildlife management. Here, we describe the use of continental-scale simulation modeling to explore how spatio-temporal changes along migratory flyways affect en-route migration success. We use an individual-based, biophysical, mechanistic bird migration model to simulate the movement of shorebirds in North America as a tool to study how such factors as drought and wetland loss may impact migratory success and modify migration patterns. Our model is driven by remote sensing and climate data and incorporates important landscape variables. The energy budget components of the model include resting, foraging, and flight, but presently predation is ignored. Results/Conclusions: We illustrate our model by studying the spring migration of sandpipers through the Great Plains to their Arctic breeding grounds. Why many species of shorebirds have shown significant declines remains a puzzle. Shorebirds are sensitive to stop-over quality and spacing because of their need for frequent refueling stops and their opportunistic feeding patterns. We predict bird "hydrographs", that is, stop-over frequency with latitude, that are in agreement with the literature. Mean stop-over durations predicted from our model for nominal cases also are consistent with the limited, but available data. For the shorebird species simulated, our model predicts that shorebirds exhibit significant plasticity and are able to shift their migration patterns in response to changing drought conditions. However, the question remains as to whether this

  8. Modeling non-monotonic properties under propositional argumentation

    Science.gov (United States)

    Wang, Geng; Lin, Zuoquan

    2013-03-01

    In the field of knowledge representation, argumentation is usually considered as an abstract framework for nonclassical logic. In this paper, however, we present a propositional argumentation framework, which can be used to more closely simulate real-world argumentation. We thereby argue that under a dialectical argumentation game, we can allow non-monotonic reasoning even under classical logic. We introduce two methods together for gaining nonmonotonicity, one by assigning plausibility to arguments, the other by adding "exceptions", which are similar to defaults. Furthermore, we give an alternative definition for propositional argumentation using argumentative models, which is highly related to the previous reasoning method, but with a simple algorithm for calculation.

  9. Mechanisms Underlying Mammalian Hybrid Sterility in Two Feline Interspecies Models.

    Science.gov (United States)

    Davis, Brian W; Seabury, Christopher M; Brashear, Wesley A; Li, Gang; Roelke-Parker, Melody; Murphy, William J

    2015-10-01

    The phenomenon of male sterility in interspecies hybrids has been observed for over a century, however, few genes influencing this recurrent phenotype have been identified. Genetic investigations have been primarily limited to a small number of model organisms, thus limiting our understanding of the underlying molecular basis of this well-documented "rule of speciation." We utilized two interspecies hybrid cat breeds in a genome-wide association study employing the Illumina 63 K single-nucleotide polymorphism array. Collectively, we identified eight autosomal genes/gene regions underlying associations with hybrid male sterility (HMS) involved in the function of the blood-testis barrier, gamete structural development, and transcriptional regulation. We also identified several candidate hybrid sterility regions on the X chromosome, with most residing in close proximity to complex duplicated regions. Differential gene expression analyses revealed significant chromosome-wide upregulation of X chromosome transcripts in testes of sterile hybrids, which were enriched for genes involved in chromatin regulation of gene expression. Our expression results parallel those reported in Mus hybrids, supporting the "Large X-Effect" in mammalian HMS and the potential epigenetic basis for this phenomenon. These results support the value of the interspecies feline model as a powerful tool for comparison to rodent models of HMS, demonstrating unique aspects and potential commonalities that underpin mammalian reproductive isolation. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Robustness for slope stability modelling under deep uncertainty

    Science.gov (United States)

    Almeida, Susana; Holcombe, Liz; Pianosi, Francesca; Wagener, Thorsten

    2015-04-01

    Landslides can have large negative societal and economic impacts, such as loss of life and damage to infrastructure. However, the ability of slope stability assessment to guide management is limited by high levels of uncertainty in model predictions. Many of these uncertainties cannot be easily quantified, such as those linked to climate change and other future socio-economic conditions, restricting the usefulness of traditional decision analysis tools. Deep uncertainty can be managed more effectively by developing robust, but not necessarily optimal, policies that are expected to perform adequately under a wide range of future conditions. Robust strategies are particularly valuable when the consequences of taking a wrong decision are high, as is often the case when managing natural hazard risks such as landslides. In our work a physically based numerical model of hydrologically induced slope instability (the Combined Hydrology and Stability Model - CHASM) is applied together with robust decision making to evaluate the most important uncertainties (storm events, groundwater conditions, surface cover, slope geometry, material strata and geotechnical properties) affecting slope stability. Specifically, impacts of climate change on long-term slope stability are incorporated, accounting for the deep uncertainty in future climate projections. Our findings highlight the potential of robust decision making to aid decision support for landslide hazard reduction and risk management under conditions of deep uncertainty.

  11. Modelling and simulation of concrete leaching under outdoor exposure conditions

    International Nuclear Information System (INIS)

    Schiopu, Nicoleta; Tiruta-Barna, Ligia; Jayr, Emmanuel; Mehu, Jacques; Moszkowicz, Pierre

    2009-01-01

    Recently, a demand regarding the assessment of the release of dangerous substances from construction products was raised by the European Commission, which issued Mandate M/366 addressed to CEN. This action relates to Essential Requirement No. 3, 'Hygiene, Health and Environment', of the Construction Products Directive (89/106/EC). The potential hazard for environment and health may arise in different life cycle stages of a construction product. During the service life stage, the release of substances due to contact with rain water is the main potential hazard source, as a consequence of the leaching phenomenon. The objective of this paper is to present the development of a coupled chemical-transport model for the case of a concrete-based construction product, i.e. concrete paving slabs, exposed to rain water under outdoor exposure conditions. The development of the model is based on an iterative process of comparing the experimental results with the simulated results up to an acceptable fit. The experiments were conducted at laboratory scale (equilibrium and dynamic leaching tests) and field scale. The product was exposed for one year in two types of leaching scenarios under outdoor conditions, 'runoff' and 'stagnation', and the element release was monitored. The model was calibrated using the experimental data obtained at laboratory scale and validated against measured field data, by taking into account the specific rain water balance and the atmospheric CO2 uptake as input parameters. The numerical tool used to model and simulate the leaching behaviour was PHREEQC, coupled with the Lawrence Livermore National Laboratory (LLNL) thermodynamic database. The simulation results are satisfying and the paper demonstrates the feasibility of the modelling approach for the leaching behaviour assessment of concrete-type construction materials

  12. Turing mechanism underlying a branching model for lung morphogenesis.

    Science.gov (United States)

    Xu, Hui; Sun, Mingzhu; Zhao, Xin

    2017-01-01

    The mammalian lung develops through branching morphogenesis. Two primary forms of branching, which occur in order in the lung, have been identified: tip bifurcation and side branching. However, the mechanisms of lung branching morphogenesis remain to be explored. In our previous study, a biological mechanism was presented for lung branching pattern formation through a branching model. Here, we provide a mathematical mechanism underlying the branching patterns. By decoupling the branching model, we demonstrated the existence of Turing instability. We performed Turing instability analysis to reveal the mathematical mechanism of the branching patterns. Our simulation results show that the Turing patterns underlying the branching patterns are spot patterns that exhibit high local morphogen concentration. The high local morphogen concentration induces the growth of branching. Furthermore, we found that sparse spot patterns underlie the tip bifurcation patterns, while dense spot patterns underlie the side branching patterns. The dispersion relation analysis shows that the Turing wavelength affects the branching structure. As the wavelength decreases, the spot patterns change from sparse to dense, the rate of tip bifurcation decreases and side branching eventually occurs instead. In the process of transformation, there may exist hybrid branching that mixes tip bifurcation and side branching. Since experimental studies have reported that branching mode switching from side branching to tip bifurcation in the lung is under genetic control, our simulation results suggest that genes control the switch of the branching mode by regulating the Turing wavelength. Our results provide a novel insight into and understanding of the formation of branching patterns in the lung and other biological systems.
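
    A minimal reaction-diffusion sketch illustrates how spot patterns with high local morphogen concentration arise from a Turing instability; the Gray-Scott activator-substrate system below is a generic stand-in (an assumption), not the lung branching model, and its parameters are simply chosen in a commonly used spot-forming regime.

```python
# Gray-Scott reaction-diffusion on a periodic grid: U is a substrate, V an
# activator-like "morphogen"; in a spot-forming regime, V concentrates into
# isolated high-concentration spots reminiscent of Turing spot patterns.
# Diffusion coefficients, feed/kill rates and the seed are illustrative assumptions.
import numpy as np

n, steps = 128, 8000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065
U = np.ones((n, n))
V = np.zeros((n, n))
U += 0.02 * np.random.default_rng(0).random((n, n))     # small noise
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5                    # central activator seed

def laplacian(Z):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1.0 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

spots = V > 0.25                                         # crude "spot" indicator
print("fraction of domain inside high-morphogen spots:", round(spots.mean(), 3))
```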

  13. Mechanical Model for Dynamic Behavior of Concrete Under Impact Loading

    Science.gov (United States)

    Sun, Yuanxiang

    Concrete is a geo-material which is used extensively in civil building and military protective structures. A coupled damage-plasticity model describing the complex behavior of concrete subjected to impact loading is proposed in this research work. The concrete is assumed to be a homogeneous continuum with pre-existing micro-cracks and micro-voids. Damage to concrete is caused by micro-crack nucleation, growth and coalescence, and is defined as the probability of fracture at a given crack density. It induces a decrease of strength and stiffness of the concrete. Compaction of concrete is physically a collapse of the material voids. It produces plastic strain in the concrete and, at the same time, an increase of the bulk modulus. In the crack growth model, micro-cracks are activated and begin to propagate gradually. When the crack density reaches a critical value, the concrete fails by crushing. The model parameters for mortar are determined using plate impact experiments with a uni-axial strain state. Comparison with the test results shows that the proposed model can give consistent predictions of the impact behavior of concrete. The proposed model may be used in the design and analysis of concrete structures under impact and shock loading. This work is supported by the State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology (YBKT14-02).

  14. Nonlinear modeling of magnetorheological energy absorbers under impact conditions

    Science.gov (United States)

    Mao, Min; Hu, Wei; Choi, Young-Tai; Wereley, Norman M.; Browne, Alan L.; Ulicny, John; Johnson, Nancy

    2013-11-01

    Magnetorheological energy absorbers (MREAs) provide adaptive vibration and shock mitigation capabilities to accommodate varying payloads, vibration spectra, and shock pulses, as well as other environmental factors. A key performance metric is the dynamic range, which is defined as the ratio of the force at maximum field to the force in the absence of field. The off-state force is typically assumed to increase linearly with speed, but at the higher shaft speeds occurring in impact events, the off-state damping exhibits nonlinear velocity squared damping effects. To improve understanding of MREA behavior under high-speed impact conditions, this study focuses on nonlinear MREA models that can more accurately predict MREA dynamic behavior for nominal impact speeds of up to 6 m s-1. Three models were examined in this study. First, a nonlinear Bingham-plastic (BP) model incorporating Darcy friction and fluid inertia (Unsteady-BP) was formulated where the force is proportional to the velocity. Second, a Bingham-plastic model incorporating minor loss factors and fluid inertia (Unsteady-BPM) to better account for high-speed behavior was formulated. Third, a hydromechanical (HM) analysis was developed to account for fluid compressibility and inertia as well as minor loss factors. These models were validated using drop test data obtained using the drop tower facility at GM R&D Center for nominal drop speeds of up to 6 m s-1.

  15. Nonlinear modeling of magnetorheological energy absorbers under impact conditions

    International Nuclear Information System (INIS)

    Mao, Min; Hu, Wei; Choi, Young-Tai; Wereley, Norman M; Browne, Alan L; Ulicny, John; Johnson, Nancy

    2013-01-01

    Magnetorheological energy absorbers (MREAs) provide adaptive vibration and shock mitigation capabilities to accommodate varying payloads, vibration spectra, and shock pulses, as well as other environmental factors. A key performance metric is the dynamic range, which is defined as the ratio of the force at maximum field to the force in the absence of field. The off-state force is typically assumed to increase linearly with speed, but at the higher shaft speeds occurring in impact events, the off-state damping exhibits nonlinear velocity squared damping effects. To improve understanding of MREA behavior under high-speed impact conditions, this study focuses on nonlinear MREA models that can more accurately predict MREA dynamic behavior for nominal impact speeds of up to 6 m s−1. Three models were examined in this study. First, a nonlinear Bingham-plastic (BP) model incorporating Darcy friction and fluid inertia (Unsteady-BP) was formulated where the force is proportional to the velocity. Second, a Bingham-plastic model incorporating minor loss factors and fluid inertia (Unsteady-BPM) to better account for high-speed behavior was formulated. Third, a hydromechanical (HM) analysis was developed to account for fluid compressibility and inertia as well as minor loss factors. These models were validated using drop test data obtained using the drop tower facility at GM R&D Center for nominal drop speeds of up to 6 m s−1. (paper)

  16. Modeling concrete under severe conditions as a multiphase material

    Energy Technology Data Exchange (ETDEWEB)

    Dal Pont, S., E-mail: dalpont@lcpc.f [Division Betons et Composites Cimentaires, BCC-LCPC, 58 Bd.Lefebvre 75738 Paris cedex 15 (France); Meftah, F. [Laboratoire Mecanique et Materiaux du Genie Civil, Universite Cergy-Pontoise, 5 mail Gay Lussac, Neuville-sur-Oise, 95031 Cergy-Pontoise Cedex (France); Schrefler, B.A. [Dipartimento di Costruzioni e Trasporti, Universita di Padova, via Marzolo 9, 35131 Padova (Italy)

    2011-03-15

    The description and prediction of the behavior of concrete under severe high temperature-pressure loading, such as that typical of the loss-of-coolant accident scenario considered for PWR containment buildings, matter in the study of such engineering applications and are also of interest in other fields, such as safety evaluations during fire. The purpose of this paper is to present a flexible staggered finite element thermo-hygral model and then to use it as a numerical tool to determine the temperature and gas pressure fields that develop in concrete when heated.

  17. Replenishment policy for an inventory model under inflation

    Science.gov (United States)

    Singh, Vikramjeet; Saxena, Seema; Singh, Pushpinder; Mishra, Nitin Kumar

    2017-07-01

    The purpose of replenishment is to keep the flow of inventory in the system. Determining an optimal replenishment policy is a great challenge in developing an inventory model. Inflation is defined as the rate at which the prices of goods and services rise over a time period. The cost parameters are affected by the rate of inflation, and a high rate of inflation affects an organization's financial condition. Against this backdrop, the present paper proposes a retailer's replenishment policy for deteriorating items with different cycle lengths under inflation. Shortages are partially backlogged. Finally, numerical examples validate the results.

  18. Constitutive model and electroplastic analysis of structures under cyclic loading

    International Nuclear Information System (INIS)

    Wang, X.; Lei, Y; Du, Q.

    1989-01-01

    Many engineering structures in nuclear reactors, thermal power stations, chemical plants and aerospace vehicles are subjected to cyclic mechanic-thermal loading, which is the main cause of structural fatigue failure. Over the past twenty years, designers and researchers have paid great attention to the research on life prediction and elastoplastic analysis of structures under cyclic loading. One of the key problems in elastoplastic analysis is to construct a reasonable constitutive model for cyclic plasticity. In the paper, the constitutive equations are briefly outlined. Then, the model is implemented in a finite element code to predict the response of cyclic loaded structural components such as a double-edge-notched plate, a grooved bar and a nozzle in spherical shell. Numerical results are compared with those from other theories and experiments

  19. Dynamic malware containment under an epidemic model with alert

    Science.gov (United States)

    Zhang, Tianrui; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan

    2017-03-01

    Alerting at the early stage of malware invasion turns out to be an important complement to malware detection and elimination. This paper addresses the issue of how to dynamically contain the prevalence of malware at a lower cost, provided alerting is feasible. A controlled epidemic model with alert is established, and an optimal control problem based on the epidemic model is formulated. The optimality system for the optimal control problem is derived. The structure of an optimal control for the proposed optimal control problem is characterized under some conditions. Numerical examples show that the cost-efficiency of an optimal control strategy can be enhanced by adjusting the upper and lower bounds on admissible controls.
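
    The abstract does not give the compartmental structure or the cost functional, so the sketch below simply forward-simulates a hypothetical susceptible-infected-alerted model under a piecewise-constant alerting control, to illustrate the kind of controlled epidemic dynamics being optimized; all rates and the control profile are assumptions, not the paper's model.

```python
# Illustrative sketch only: a hypothetical compartmental malware model with an
# "alerted" class A, where susceptible nodes are alerted at a controlled rate
# u(t). The compartments, rates and the piecewise-constant control are assumed
# for illustration, not taken from the cited paper.

import numpy as np
from scipy.integrate import odeint

beta, gamma, delta = 0.4, 0.1, 0.05   # infection, cure and alert-decay rates

def rhs(y, t, u_func):
    S, I, A = y
    u = u_func(t)                      # alerting effort (admissible control)
    dS = -beta * S * I - u * S + delta * A
    dI = beta * S * I - gamma * I
    dA = u * S - delta * A
    return [dS, dI, dA]

u_on = lambda t: 0.2 if t < 20 else 0.05   # alert strongly at the early stage
t = np.linspace(0, 60, 601)
sol = odeint(rhs, [0.95, 0.05, 0.0], t, args=(u_on,))
print("peak infected fraction:", sol[:, 1].max().round(3))
```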

  20. Groundwater flow modelling under ice sheet conditions. Scoping calculations

    Energy Technology Data Exchange (ETDEWEB)

    Jaquet, O.; Namar, R. (In2Earth Modelling Ltd (Switzerland)); Jansson, P. (Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden))

    2010-10-15

    The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheet and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, the understanding of groundwater flow patterns in connection to glaciations is an important issue for the geological disposal at long term. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the

  1. Groundwater flow modelling under ice sheet conditions. Scoping calculations

    International Nuclear Information System (INIS)

    Jaquet, O.; Namar, R.; Jansson, P.

    2010-10-01

    The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheet and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, the understanding of groundwater flow patterns in connection to glaciations is an important issue for the geological disposal at long term. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the

  2. Modelling of Performance of Caisson Type Breakwaters under Extreme Waves

    Science.gov (United States)

    Güney Doǧan, Gözde; Özyurt Tarakcıoǧlu, Gülizar; Baykal, Cüneyt

    2016-04-01

    Many coastal structures are designed without considering the loads of tsunami-like waves or long waves, although they are constructed in areas prone to encountering these waves. The performance of caisson type breakwaters under extreme swells is tested in the Middle East Technical University (METU) Coastal and Ocean Engineering Laboratory. This paper presents the comparison of pressure measurements taken along the surface of caisson type breakwaters and obtained from numerical modelling of them using IH2VOF, as well as the damage behavior of the breakwater under the same extreme swells, tested in a wave flume at METU. Experiments are conducted in the 1.5 m wide wave flume, which is divided into two parallel sections (0.74 m wide each). A piston type wave maker located at one end of the wave basin is used to generate the long wave conditions. The water depth is set to 0.4 m and kept constant during the experiments. A caisson type breakwater is constructed on one side of the divided flume. The model scale, based on the Froude similitude law, is chosen as 1:50. Seven different wave conditions are applied in the tests, with wave periods ranging from 14.6 s to 34.7 s, wave heights from 3.5 m to 7.5 m and steepnesses from 0.002 to 0.015 in prototype scale. The design wave parameters for the breakwater were a 5 m wave height and a 9.5 s wave period in prototype. To determine the damage to the breakwater, which was designed according to this wave but tested under swell waves, video and photo analysis as well as breakwater profile measurements before and after each test are performed. Further investigations are carried out on the wave forces acting on the concrete blocks of the caisson structures via pressure measurements on the surfaces of these structures, where the structures are fixed to the channel bottom minimizing. Finally, these pressure measurements will be compared with the results obtained from the numerical study using IH2VOF, which is one of the RANS models that can be applied to simulate

  3. Radium-226 equilibrium between water and lake herring, Coregonus artedii, tissues attained within fish lifetime: confirmation in this species of one assumption in the simple linear concentration factor model

    International Nuclear Information System (INIS)

    Clulow, F.V.; Pyle, G.G.

    1997-01-01

    Equilibrium conditions are assumed in the simple linear concentration factor model commonly used in simulations of contaminant flow through ecosystems and in dose and risk calculations. Predictions derived from a power function model have suggested that if the time scale of the food-chain transfer is less than six years in fish, radium-226 equilibrium will not be achieved in nature, thereby violating the equilibrium requirement in the concentration factor model. Our results indicate 226Ra equilibrium is achieved in a natural population of lake herring (Coregonus artedii), contrary to predictions of the power function model. (author)
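
    For reference, the simple linear concentration factor model referred to above has the standard form below (the notation is assumed here, not quoted from the record): at equilibrium the tissue concentration is taken to be directly proportional to the water concentration.

```latex
% Standard form of the simple linear concentration-factor model (notation assumed):
\[
  C_{\mathrm{fish}} = CF \cdot C_{\mathrm{water}},
  \qquad
  CF = \frac{C_{\mathrm{fish}}}{C_{\mathrm{water}}} = \text{const. at equilibrium},
\]
% which is the equilibrium assumption the study tests for radium-226 in lake herring.
```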

  4. A quasilinear model for solute transport under unsaturated flow

    International Nuclear Information System (INIS)

    Houseworth, J.E.; Leem, J.

    2009-01-01

    We developed an analytical solution for solute transport under steady-state, two-dimensional, unsaturated flow and transport conditions for the investigation of high-level radioactive waste disposal. The two-dimensional, unsaturated flow problem is treated using the quasilinear flow method for a system with homogeneous material properties. Dispersion is modeled as isotropic and is proportional to the effective hydraulic conductivity. This leads to a quasilinear form for the transport problem in terms of a scalar potential that is analogous to the Kirchhoff potential for quasilinear flow. The solutions for both flow and transport scalar potentials take the form of Fourier series. The particular solution given here is for two sources of flow, with one source containing a dissolved solute. The solution method may easily be extended, however, for any combination of flow and solute sources under steady-state conditions. The analytical results for multidimensional solute transport problems, which previously could only be solved numerically, also offer an additional way to benchmark numerical solutions. An analytical solution for two-dimensional, steady-state solute transport under unsaturated flow conditions is presented. A specific case with two sources is solved but may be generalized to any combination of sources. The analytical results complement numerical solutions, which were previously required to solve this class of problems.
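
    The quasilinear method mentioned above is conventionally built on an exponential (Gardner-type) hydraulic conductivity, which makes the steady flow equation linear in a Kirchhoff potential; the notation below is a standard textbook form assumed for illustration, not quoted from the paper.

```latex
% Gardner-type exponential conductivity and the associated Kirchhoff potential
% (standard quasilinear-flow notation, assumed for illustration):
\[
  K(h) = K_s\, e^{\alpha h},
  \qquad
  \Phi(h) = \int_{-\infty}^{h} K(h')\,\mathrm{d}h' = \frac{K(h)}{\alpha},
\]
% so that steady unsaturated flow becomes linear in \Phi, and an analogous scalar
% potential can be introduced for the advection-dispersion (transport) problem.
```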

  5. Concrete structures vulnerability under impact: characterization, modeling, and validation - Concrete slabs vulnerability under impact: characterization, modeling, and validation

    International Nuclear Information System (INIS)

    Xuan Dung Vu

    2013-01-01

    Concrete is a material whose behavior is complex, especially under extreme loads. The objective of this thesis is to carry out an experimental characterization of the behavior of concrete under impact-generated stresses (confined compression and dynamic tension) and to develop a robust numerical tool to reliably model this behavior. In the experimental part, we studied concrete samples from the VTT center (Technical Research Center of Finland). First, quasi-static triaxial compression tests with confinement varying from 0 MPa (unconfined compression test) to 600 MPa were carried out. The stiffness of the concrete increases with confinement pressure because of the reduction of porosity, and the maximum shear strength of the concrete therefore increases. The presence of water plays an important role when the degree of saturation is high and the concrete is subjected to high confinement pressure: beyond a certain level of confinement pressure, the maximum shear strength of concrete decreases with increasing water content. The water also influences the volumetric behavior of concrete. When all free pores are closed as a result of compaction, the low compressibility of the water prevents further deformation of the concrete, so that wet concrete deforms less than dry concrete at the same mean stress. The second part of the experimental program concerns dynamic tensile tests at different loading velocities and different moisture conditions of the concrete. The results show that the tensile strength of concrete C50 may increase up to 5 times its static strength for a strain rate of about 100 s⁻¹. In the numerical part, we are interested in improving an existing coupled constitutive model of concrete behavior called PRM (Pontiroli-Rouquand-Mazars) to predict concrete behavior under impact. This model is based on a coupling between a damage model, which is able to describe the degradation mechanisms and cracking of the concrete at

  6. Modeling of thermal explosion under pressure in metal ceramic systems

    International Nuclear Information System (INIS)

    Shapiro, M.; Dudko, V.; Skachek, B.; Matvienko, A.; Gotman, I.; Gutmanas, E.Y.

    1998-01-01

    The process of reactive in situ synthesis of dense ceramic matrix composites in Ti-B-C, Ti-B-N, Ti-Si-N systems is modeled. These ceramics are fabricated on the basis of compacted blends of ceramic powders, namely Ti-B4C and/or Ti-BN. The objectives of the project are to identify and investigate the optimal thermal conditions preferable for production of fully dense ceramic matrix composites. Towards this goal, heat transfer and combustion in dense and porous ceramic blends are investigated during monotonous heating at a constant rate. This process is modeled using a heat transfer-combustion model with kinetic parameters determined from the differential thermal analysis of the experimental data. The kinetic burning parameters and the model developed are further used to describe the thermal explosion synthesis in a restrained die under pressure. It is shown that heat removal from the reaction zone affects the combustion process and the final phase composition

  7. Model tracking dual stochastic controller design under irregular internal noises

    International Nuclear Information System (INIS)

    Lee, Jong Bok; Heo, Hoon; Cho, Yun Hyun; Ji, Tae Young

    2006-01-01

    Although many methods for the control of irregular external noise have been introduced and implemented, it is still necessary to design a controller that excludes various noises more effectively and efficiently. Accumulation of errors due to model tracking, internal noises (thermal noise, shot noise and 1/f noise) that come from elements such as resistors, diodes and transistors in the circuit system, and numerical errors due to digital processing often destabilize the system and reduce system performance. A new stochastic controller is adopted to remove those noises while using a conventional controller simultaneously. A design method for a model tracking dual controller is proposed to improve the stability of the system while removing external and internal noises. In this study, the design process of the model tracking dual stochastic controller is introduced; it improves system performance and guarantees robustness under irregular internal noises that can be created internally. The model tracking dual stochastic controller utilizing the F-P-K stochastic control technique developed earlier is implemented to reveal its performance via simulation.

  8. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
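
    The paper defines its own representativeness function and optimization tool; the sketch below is only a much simpler stand-in for the idea, greedily choosing a handful of scenarios whose quantiles of a single output variable track those of the full set, so the reduced subset roughly preserves the risk curve. The data and the mismatch measure are invented for illustration.

```python
# Much simpler stand-in for the idea described above (not the paper's metric):
# greedily pick k scenarios whose empirical quantiles of one output variable
# (e.g. NPV) stay close to the quantiles of the full scenario set, so the
# reduced subset approximately preserves the risk curve. Data are synthetic.

import numpy as np

rng = np.random.default_rng(1)
npv = rng.normal(loc=100.0, scale=30.0, size=500)       # full set of scenarios
probs = np.linspace(0.05, 0.95, 19)                      # risk-curve percentiles
target = np.quantile(npv, probs)

def mismatch(subset):
    return np.max(np.abs(np.quantile(subset, probs) - target))

selected = []
for _ in range(9):                                       # pick 9 representatives
    best = min((i for i in range(len(npv)) if i not in selected),
               key=lambda i: mismatch(npv[selected + [i]]))
    selected.append(best)

print("selected scenario indices:", selected)
print("max quantile mismatch    :", round(mismatch(npv[selected]), 2))
```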

  9. The role of uncertainty in supply chains under dynamic modeling

    Directory of Open Access Journals (Sweden)

    M. Fera

    2017-01-01

    Uncertainty in the supply chains (SCs) of manufacturing and services firms will become, over the coming decades, increasingly important for companies called to compete in a newly globalized economy. Risky situations for manufacturing are considered in trying to identify the optimal positioning of the order penetration point (OPP), i.e. the best level of information about the client's order going back through the several supply chain (SC) phases of engineering, procurement, production and distribution. This work aims at defining a system dynamics model to assess the competitiveness resulting from positioning the order at different SC locations. A Taguchi analysis has been implemented to create a decision map for identifying possible strategic decisions under different scenarios and with alternatives for order location in the SC levels. Centralized and decentralized strategies for SC integration are discussed. In the proposed model, the location of the OPP is influenced by demand variation, production time, stock-outs and stock amount. The results of this research are as follows: (i) customer-oriented strategies are preferable under high volatility of demand; (ii) production-focused strategies are suggested when the probability of stock-outs is high; (iii) no specific location is preferable if a centralized control architecture is implemented; (iv) centralization requires cooperation among partners to achieve the SC optimum point; (v) the producer must not prefer the OPP location at the retailer level when the general strategy is focused on a decentralized approach.

  10. EARLY-TYPE GALAXIES AT z ∼ 1.3. II. MASSES AND AGES OF EARLY-TYPE GALAXIES IN DIFFERENT ENVIRONMENTS AND THEIR DEPENDENCE ON STELLAR POPULATION MODEL ASSUMPTIONS

    International Nuclear Information System (INIS)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Nakata, F.; Kodama, T.; Stanford, S. A.; Rettura, A.; Jee, M. J.; Holden, B. P.; Illingworth, G.; Postman, M.; White, R. L.; Rosati, P.; Blakeslee, J. P.; Demarco, R.; Eisenhardt, P.; Tanaka, M.

    2011-01-01

    We have derived masses and ages for 79 early-type galaxies (ETGs) in different environments at z ∼ 1.3 in the Lynx supercluster and in the GOODS/CDF-S field using multi-wavelength (0.6-4.5 μm; KPNO, Palomar, Keck, Hubble Space Telescope, Spitzer) data sets. At this redshift the contribution of the thermally pulsing asymptotic giant branch (TP-AGB) phase is important for ETGs, and the mass and age estimates depend on the choice of the stellar population model used in the spectral energy distribution fits. We describe in detail the differences among model predictions for a large range of galaxy ages, showing the dependence of these differences on age. Current models still yield large uncertainties. While recent models from Maraston and Charlot and Bruzual offer better modeling of the TP-AGB phase with respect to less recent Bruzual and Charlot models, their predictions do not often match. The modeling of this TP-AGB phase has a significant impact on the derived parameters for galaxies observed at high redshift. Some of our results do not depend on the choice of the model: for all models, the most massive galaxies are the oldest ones, independent of the environment. When using the Maraston and Charlot and Bruzual models, the mass distribution is similar in the clusters and in the groups, whereas in our field sample there is a deficit of massive (M ≳ 10^11 M_sun) ETGs. According to those last models, ETGs belonging to the cluster environment host on average older stars with respect to group and field populations. This difference is less significant than the age difference in galaxies of different masses.

  11. Kinetics of UO2(s) dissolution under reducing conditions: Numerical modelling

    International Nuclear Information System (INIS)

    Puigdomenech, I.; Casas, I.; Bruno, J.

    1990-05-01

    A numerical model is presented that describes the dissolution and precipitation of UO2(s) under reducing conditions. For aqueous solutions with pH > 4, the main reaction is: UO2(s) + 2H2O ↔ U(OH)4(aq). The rate constant for the precipitation reaction is found to be log(k_p) = -1.2 ± 0.2 h⁻¹ m⁻², while the value for the rate constant of the dissolution reaction is log(k_d) = -9.0 ± 0.2 mol/(l h m²). Most of the experiments reported in the literature show a fast initial dissolution of a surface film of hexavalent uranium oxide. Making the assumption that the chemical composition of the surface coating is U3O7(s), we have derived a mechanism for this process, and its rate constants have been obtained. The influence of HCO3⁻ and CO3²⁻ on the mechanism of dissolution and precipitation of UO2(s) is still unclear. From the solubility measurements reported, one may conclude that the identity of the aqueous complexes in solution is not well known. Therefore it is not possible to make a mechanistic interpretation of the kinetic data in carbonate medium. (orig.)
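
    Assuming the implied first-order rate law (net rate per unit reactive surface area proportional to k_d - k_p·C; the surface area per litre below is hypothetical), a short numerical integration illustrates the approach to the steady-state solubility k_d/k_p implied by the reported rate constants.

```python
# Sketch (assumed rate law, not quoted from the abstract): net dissolution rate
# dC/dt = a * (k_d - k_p * C), with a the reactive UO2 surface area per litre
# of solution. The rate constants use the reported values; a is hypothetical.

import numpy as np

k_d = 10 ** (-9.0)     # mol /(l h m^2)  dissolution rate constant
k_p = 10 ** (-1.2)     # 1 /(h m^2)      precipitation rate constant
a   = 1.0              # m^2 of UO2 surface per litre (hypothetical)

C, dt, t_end = 0.0, 1.0, 20000.0           # mol/l, h, h
for _ in np.arange(0.0, t_end, dt):
    C += a * (k_d - k_p * C) * dt           # explicit Euler step

print(f"U(IV) concentration after {t_end:.0f} h : {C:.2e} mol/l")
print(f"steady-state solubility k_d/k_p        : {k_d / k_p:.2e} mol/l")
```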

  12. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    NARCIS (Netherlands)

    Ernst, Anja F.; Albers, Casper J.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated

  13. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971

  14. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Directory of Open Access Journals (Sweden)

    Anja F. Ernst

    2017-05-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
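
    A minimal sketch of the specific misconception highlighted above: normality is an assumption about the regression errors, so it should be checked on the residuals of the fitted model rather than on the raw variables. The data are simulated, and the check below (OLS plus a Shapiro-Wilk test) is only one reasonable way to carry it out.

```python
# Minimal sketch of the distinction highlighted above: normality is an assumption
# about the regression errors, not about the raw variables themselves. Data are
# simulated; this is an illustration, not a reanalysis of the reviewed papers.

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)             # predictor: clearly non-normal
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, size=200)   # errors: normal

fit = sm.OLS(y, sm.add_constant(x)).fit()

# Wrong target: testing the raw variable (can reject even when the model is fine)
print("Shapiro-Wilk p, raw x     :", stats.shapiro(x).pvalue)
# Right target: testing the residuals of the fitted model
print("Shapiro-Wilk p, residuals :", stats.shapiro(fit.resid).pvalue)
```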

  15. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  16. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  17. Modeling of electron behaviors under microwave electric field in methane and air pre-mixture gas plasma assisted combustion

    Science.gov (United States)

    Akashi, Haruaki; Sasaki, K.; Yoshinaga, T.

    2011-10-01

    Recently, plasma-assisted combustion has attracted attention as a way of achieving more efficient combustion of fossil fuels, reducing pollutants and so on. Shinohara et al. have reported that the flame length of a methane and air premixed burner shortened when irradiated with microwave power, without an increase in gas temperature. This suggests that electrons heated by the microwave electric field assist the combustion. They also measured emission from the second positive band system (2nd PBS) of nitrogen during the irradiation. To clarify this mechanism, electron behavior under microwave power should be examined. To obtain electron transport parameters, electron Monte Carlo simulations in methane and air mixture gas have been carried out. A simple model has been developed to simulate the inside of the flame. To keep this model simple, some assumptions are made: the electrons diffuse from the combustion plasma region, and the electrons quickly reach their equilibrium state. It is found that the simulated emission from the 2nd PBS agrees with the experimental result.

  18. A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework

    Directory of Open Access Journals (Sweden)

    Jianwei Gao

    2017-02-01

    This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risky assets are assumed to be uncertain variables subject to reputable experts' evaluations. Second, under this assumption and combining it with the risk-free interest rate, we define a risk-free protection index (RFPI), which can measure the degree of protection when a loss on risky assets occurs. Third, noting that proportion entropy serves as a complementary means of reducing risk through a preset diversification requirement, we put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang's risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risky assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM.

  19. Procurement Under The UNCITRAL Model Law: A Southern Africa Perspective

    Directory of Open Access Journals (Sweden)

    Stephen De La Harpe

    2015-12-01

    In Africa, economic integration, realised through regional integration, is seen as one of the driving factors that will improve the lives of its people. To enable regionalisation and economic growth and to unlock the potential of Africa, its infrastructure will have to be improved. Infrastructure will on the whole be realised through public procurement. The stages for opening up procurement markets, referred to by Yukins and Schooner, are discussed, and it is concluded that the states in SADC are still in the initial stages of opening their public procurement markets to regional competition. Although COMESA is not yet in full compliance with all four stages, great strides have been made and elements of all stages have been addressed. Because of the influence the Model Law has already had in COMESA, and in the rest of Africa, it would be counterproductive should SADC not take the same route as COMESA. If regard is had to the four categories of procurement rules that serve as barriers to national procurement markets, as set out by Arrowsmith, it is clear that all of these are present in most SADC member states. In the case of COMESA these barriers also still exist, albeit to a lesser extent. What is necessary is a phased approach to address all of these barriers. This will be possible under the UNCITRAL Model Law, as the 2011 Model Law does provide for the possibility of complying with international obligations and for states to allow for socio-economic objectives in their procurement regimes. There can be little doubt that the 1994 Model Law has already had a marked influence on public procurement regulation in Africa and that the 2011 Model Law will continue to do so in future. Public procurement is essential for economic development, and its integration and harmonisation on a regional basis is the first step. In this regard SADC, and especially South Africa, has an important role to play.

  20. Sustainable infrastructure system modeling under uncertainties and dynamics

    Science.gov (United States)

    Huang, Yongxi

    Infrastructure systems support human activities in transportation, communication, water use, and energy supply. This dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research efforts is to improve the sustainability of the infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research efforts in critical transportation infrastructure concern the development of strategic robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) in reflecting various risk preferences. The models are evaluated in a case study of Singapore, and the results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches in achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The renewable energy infrastructure research aims to answer the following key questions: (1) is renewable energy an economically viable solution? (2) what are the energy distribution and infrastructure system requirements to support such energy supply systems in hedging against potential risks? (3) how does the energy system adapt to the dynamics of evolving technology and societal needs in the transition to a renewable-energy-based society? The study of Renewable Energy System Planning with Risk Management incorporates risk management into the strategic planning of the supply chains. The physical design and operational management are integrated as a whole in seeking mitigations against the

  1. Model of personal consumption under conditions of modern economy

    Science.gov (United States)

    Rakhmatullina, D. K.; Akhmetshina, E. R.; Ignatjeva, O. A.

    2017-12-01

    In the modern economy, in connection with the development of production, the expansion and differentiation of the market for goods and services, and the active use of marketing tools in sales, changes occur in the system of values and consumer needs. The motives that drive the consumer are transformed, stimulating activity. The article presents a model of personal consumption that takes into account modern trends in consumer behavior. The consumer, in making a choice, seeks to maximize the overall utility from consumption, i.e. physiological and socio-psychological satisfaction, in accordance with his expectations, preferences and conditions of consumption. The system of his preferences is formed under the influence of factors of a different nature. It is also shown that the structure of consumer spending allows us to characterize and predict further consumer behavior in the market. Based on the proposed model and an analysis of current trends in consumer behavior, conclusions and recommendations have been made that can be used by legislative and executive government bodies, business organizations, research centres and other structures to form a methodological and analytical tool for preparing a forecast model of consumption.

  2. Robust Optimization Model for Production Planning Problem under Uncertainty

    Directory of Open Access Journals (Sweden)

    Pembe GÜÇLÜ

    2017-01-01

    Business conditions change very quickly, and taking into account the uncertainty engendered by such changes has become almost a rule in planning. Robust optimization techniques, which are methods of handling uncertainty, ensure that results are less sensitive to changing conditions. Production planning, in its most basic definition, is deciding which product will be produced, when, and in what quantity. The modeling and solution of production planning problems change depending on the structure of the production processes, parameters and variables. In this paper, the aim is to generate and apply a scenario-based robust optimization model for a capacitated two-stage multi-product production planning problem under parameter and demand uncertainty. With this purpose, the production planning problem of a textile company operating in İzmir has been modeled and solved, and the results of the deterministic scenarios and of the robust method have been compared. The robust method provides a production plan with a higher cost which, however, remains close to feasible and optimal for most of the different scenarios that may occur in the future.
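
    The paper's two-stage multi-product formulation is not reproduced here; the sketch below only illustrates the scenario-based robust (min-max cost) idea on a single-product toy problem, solved as a linear program. All demands, costs and the capacity are hypothetical.

```python
# Toy sketch of a scenario-based robust (min-max cost) production plan for one
# product, solved with scipy.optimize.linprog. Demands, costs and capacity are
# hypothetical; the paper's actual two-stage multi-product model is richer.

import numpy as np
from scipy.optimize import linprog

d = np.array([80.0, 100.0, 130.0])     # demand scenarios
c_prod, c_short, c_hold, cap = 4.0, 9.0, 1.0, 150.0
S = len(d)

# variables: x = [q, t, u_1..u_S (shortage), o_1..o_S (surplus)]
n = 2 + 2 * S
obj = np.zeros(n); obj[1] = 1.0        # minimise worst-case cost t

A_eq = np.zeros((S, n)); b_eq = d.copy()
A_ub = np.zeros((S, n)); b_ub = np.zeros(S)
for s in range(S):
    A_eq[s, 0] = 1.0; A_eq[s, 2 + s] = 1.0; A_eq[s, 2 + S + s] = -1.0  # q + u_s - o_s = d_s
    A_ub[s, 0] = c_prod; A_ub[s, 2 + s] = c_short; A_ub[s, 2 + S + s] = c_hold
    A_ub[s, 1] = -1.0                                                  # cost_s <= t

bounds = [(0.0, cap), (0.0, None)] + [(0.0, None)] * (2 * S)
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("robust production quantity:", round(res.x[0], 1),
      "worst-case cost:", round(res.x[1], 1))
```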

  3. Modelling crop yield in Iberia under drought conditions

    Science.gov (United States)

    Ribeiro, Andreia; Páscoa, Patrícia; Russo, Ana; Gouveia, Célia

    2017-04-01

    Improved assessment of cereal yield and crop loss under drought conditions is essential to meet increasing economic demands. The growing frequency and severity of extreme drought conditions in the Iberian Peninsula (IP) have likely been responsible for negative impacts on agriculture, namely crop yield losses. Therefore, continuous monitoring of vegetation activity and a reliable estimation of drought impacts are crucial to contribute to agricultural drought management and the development of suitable information tools. This work aims to assess the influence of drought conditions on agricultural yields over the IP, considering cereal yields from mainly rainfed agriculture for the provinces with higher productivity. The main target is to develop a strategy to model drought risk on agriculture for wheat yield at the province level. In order to achieve this goal, a combined assessment was made using a drought indicator (Standardized Precipitation Evapotranspiration Index, SPEI) to evaluate drought conditions together with a widely used vegetation index (Normalized Difference Vegetation Index, NDVI) to monitor vegetation activity. A correlation analysis between detrended wheat yield and SPEI was performed in order to assess the vegetation response to each time scale of drought occurrence and also to identify the moment of the vegetative cycle when crop yields are most vulnerable to drought conditions. The time scales and months of SPEI, together with the months of NDVI, best related to wheat yield were chosen to perform a multivariate regression analysis to simulate crop yield. Model results are satisfactory and highlight the usefulness of such analysis in the framework of developing a drought risk model for crop yields. From an operational point of view, the results aim to contribute to an improved understanding of crop yield management under dry conditions, particularly adding substantial information on the advantages of combining

  4. Transient modelling of a natural circulation loop under variable pressure

    International Nuclear Information System (INIS)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian; Instituto de Engenharia Nuclear

    2017-01-01

    The objective of the present work is to model the transient operation of a natural circulation loop, which is one-tenth scale in height to a typical Passive Residual Heat Removal system (PRHR) of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the single- and two-phase flow similarity criteria to it. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated in the air trapped in the expansion tank, which was compressed by the dilatation of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the heating of the coolant during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was computationally implemented via a set of finite difference equations. The corresponding computational algorithm was of the explicit, marching type in the time discretization, with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, having the coolant flow rate and the heating power as control parameters. The variables used in the

  5. Mechanical Modeling of a WIPP Drum Under Pressure

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Jeffrey A. [Sandia National Laboratories, Albuquerque, NM (United States)

    2014-11-25

    Mechanical modeling was undertaken to support the Waste Isolation Pilot Plant (WIPP) technical assessment team (TAT) investigating the February 14th, 2014 event in which there was a radiological release at the WIPP. The initial goal of the modeling was to examine whether a mechanical model could inform the team about the event. The intention was to have a model that could test scenarios with respect to the rate of pressurization. It was expected that the deformation and failure (inability of the drum to contain any pressure) would vary according to the pressurization rate. As the work progressed there was also interest in using the mechanical analysis of the drum to investigate what would happen if a drum pressurized when it was located under a standard waste package; specifically, whether the deformation would be detectable from camera views within the room. A finite element model of a WIPP 55-gallon drum was developed that used all hex elements. Analyses were conducted using the explicit transient dynamics module of Sierra/SM to explore potential pressurization scenarios of the drum. These analyses show deformation patterns similar to documented pressurization tests of drums in the literature. The failure pressures calculated from previous tests documented in the literature vary from as little as 16 psi to 320 psi. In addition, previous testing documented in the literature shows drums bulging but not failing at pressures ranging from 69 to 138 psi. The analyses performed for this study found the drums failing at pressures ranging from 35 psi to 75 psi. When the drums are pressurized quickly (in 0.01 seconds) there is significant deformation of the lid. At lower pressurization rates the deformation of the lid is considerably less, yet the lids will still open from the pressure. The analyses demonstrate the influence of pressurization rate on deformation and opening pressure of the drums. Analyses conducted with a substantial mass on top of the closed drum demonstrate that the

  6. Transient modelling of a natural circulation loop under variable pressure

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian, E-mail: avianna@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br, E-mail: faccini@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental

    2017-07-01

    The objective of the present work is to model the transient operation of a natural circulation loop, which is one-tenth scale in height to a typical Passive Residual Heat Removal system (PRHR) of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the single- and two-phase flow similarity criteria to it. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated in the air trapped in the expansion tank, which was compressed by the dilatation of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the heating of the coolant during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was computationally implemented via a set of finite difference equations. The corresponding computational algorithm was of the explicit, marching type in the time discretization, with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, having the coolant flow rate and the heating power as control parameters. The variables used in the

  7. Testing the assumptions of the pyrodiversity begets biodiversity hypothesis for termites in semi-arid Australia.

    Science.gov (United States)

    Davis, Hayley; Ritchie, Euan G; Avitabile, Sarah; Doherty, Tim; Nimmo, Dale G

    2018-04-01

    Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis: (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary.

  8. Using advanced surface complexation models for modelling soil chemistry under forests: Solling forest, Germany

    International Nuclear Information System (INIS)

    Bonten, Luc T.C.; Groenenberg, Jan E.; Meesenburg, Henning; Vries, Wim de

    2011-01-01

    Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: → Surface complexation models can be well applied in field studies. → Soil chemistry under a forest site is adequately modelled using generic parameters. → The model is easily extended with extra elements within the existing framework. → Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.

  9. Using advanced surface complexation models for modelling soil chemistry under forests: Solling forest, Germany

    Energy Technology Data Exchange (ETDEWEB)

    Bonten, Luc T.C., E-mail: luc.bonten@wur.nl [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Groenenberg, Jan E. [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Meesenburg, Henning [Northwest German Forest Research Station, Abt. Umweltkontrolle, Sachgebiet Intensives Umweltmonitoring, Goettingen (Germany); Vries, Wim de [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands)

    2011-10-15

    Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: > Surface complexation models can be well applied in field studies. > Soil chemistry under a forest site is adequately modelled using generic parameters. > The model is easily extended with extra elements within the existing framework. > Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.

  10. Modeling the behaviour of shape memory materials under large deformations

    Science.gov (United States)

    Rogovoy, A. A.; Stolbova, O. S.

    2017-06-01

    In this study, the models describing the behavior of shape memory alloys, ferromagnetic materials and polymers have been constructed, using a formalized approach to develop the constitutive equations for complex media under large deformations. The kinematic and constitutive equations, satisfying the principles of thermodynamics and objectivity, have been derived. The application of the Galerkin procedure to the systems of equations of solid mechanics allowed us to obtain the Lagrange variational equation and variational formulation of the magnetostatics problems. These relations have been tested in the context of the problems of finite deformation in shape memory alloys and ferromagnetic materials during forward and reverse martensitic transformations and in shape memory polymers during forward and reverse relaxation transitions from a highly elastic to a glassy state.

  11. A Stone Resource Assignment Model under the Fuzzy Environment

    Directory of Open Access Journals (Sweden)

    Liming Yao

    2012-01-01

    to tackle a stone resource assignment problem with the aim of decreasing dust and waste water emissions. On the upper level, the local government wants to assign a reasonable exploitation amount to each stone plant so as to minimize total emissions and maximize employment and economic profit. On the lower level, stone plants must reasonably assign stone resources to produce different stone products under the exploitation constraint. To deal with the inherent uncertainties, the objective functions and constraints are defuzzified using a possibility measure. A fuzzy simulation-based improved simulated annealing algorithm (FS-ISA) is designed to search for the Pareto optimal solutions. Finally, a case study is presented to demonstrate the practicality and efficiency of the model. Results and a comparison analysis are presented to highlight the performance of the optimization method, which proves to be very efficient compared with other algorithms.

  12. Drug policy in sport: hidden assumptions and inherent contradictions.

    Science.gov (United States)

    Smith, Aaron C T; Stewart, Bob

    2008-03-01

    This paper considers the assumptions underpinning current drugs-in-sport policy arrangements. We examine the assumptions and contradictions inherent in the policy approach, paying particular attention to the evidence that supports different policy arrangements. We find that the current anti-doping policy of the World Anti-Doping Agency (WADA) contains inconsistencies and ambiguities. WADA's policy position is predicated upon four fundamental principles: first, the need for sport to set a good example; secondly, the necessity of ensuring a level playing field; thirdly, the responsibility to protect the health of athletes; and fourthly, the importance of preserving the integrity of sport. A review of the evidence, however, suggests that sport is a problematic institution when it comes to setting a good example for the rest of society. Neither is it clear that sport has an inherent or essential integrity that can only be sustained through regulation. Furthermore, it is doubtful that WADA's anti-doping policy is effective in maintaining a level playing field, or is the best means of protecting the health of athletes. The WADA anti-doping policy is based too heavily on principles of minimising drug use, and gives insufficient weight to the minimisation of drug-related harms. As a result, drug-related harms are being poorly managed in sport. We argue that anti-doping policy in sport would benefit from placing greater emphasis on a harm minimisation model.

  13. Stable isotopes and elasmobranchs: tissue types, methods, applications and assumptions.

    Science.gov (United States)

    Hussey, N E; MacNeil, M A; Olin, J A; McMeans, B C; Kinney, M J; Chapman, D D; Fisk, A T

    2012-04-01

    Stable-isotope analysis (SIA) can act as a powerful ecological tracer with which to examine diet, trophic position and movement, as well as more complex questions pertaining to community dynamics and feeding strategies or behaviour among aquatic organisms. With major advances in the understanding of the methodological approaches and assumptions of SIA through dedicated experimental work in the broader literature coupled with the inherent difficulty of studying typically large, highly mobile marine predators, SIA is increasingly being used to investigate the ecology of elasmobranchs (sharks, skates and rays). Here, the current state of SIA in elasmobranchs is reviewed, focusing on available tissues for analysis, methodological issues relating to the effects of lipid extraction and urea, the experimental dynamics of isotopic incorporation, diet-tissue discrimination factors, estimating trophic position, diet and mixing models and individual specialization and niche-width analyses. These areas are discussed in terms of assumptions made when applying SIA to the study of elasmobranch ecology and the requirement that investigators standardize analytical approaches. Recommendations are made for future SIA experimental work that would improve understanding of stable-isotope dynamics and advance their application in the study of sharks, skates and rays. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  14. Has the "Equal Environments" assumption been tested in twin studies?

    Science.gov (United States)

    Eaves, Lindon; Foley, Debra; Silberg, Judy

    2003-12-01

    A recurring criticism of the twin method for quantifying genetic and environmental components of human differences is the necessity of the so-called "equal environments assumption" (EEA) (i.e., that monozygotic and dizygotic twins experience equally correlated environments). It has been proposed to test the EEA by stratifying twin correlations by indices of the amount of shared environment. However, relevant environments may also be influenced by genetic differences. We present a model for the role of genetic factors in niche selection by twins that may account for variation in indices of the shared twin environment (e.g., contact between members of twin pairs). Simulations reveal that stratification of twin correlations by amount of contact can yield spurious evidence of large shared environmental effects in some strata and even give false indications of genotype x environment interaction. The stratification approach to testing the equal environments assumption may be misleading and the results of such tests may actually be consistent with a simpler theory of the role of genetic factors in niche selection.
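
    The twin decomposition that depends on the EEA is easy to state: with equally correlated environments, Falconer's formulas give h² = 2(rMZ − rDZ) and c² = 2rDZ − rMZ. A minimal sketch with made-up twin correlations (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_twin_pairs(n, r, sd=1.0):
    """Draw n twin pairs whose trait values correlate at r (bivariate normal)."""
    cov = np.array([[sd**2, r * sd**2], [r * sd**2, sd**2]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

mz = simulate_twin_pairs(500, r=0.70)   # monozygotic pairs (illustrative correlation)
dz = simulate_twin_pairs(500, r=0.45)   # dizygotic pairs (illustrative correlation)

r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]

# Falconer's estimates: valid only if MZ and DZ pairs share environments equally (the EEA).
h2 = 2 * (r_mz - r_dz)       # additive genetic variance share
c2 = 2 * r_dz - r_mz         # shared-environment share
e2 = 1 - r_mz                # non-shared environment plus error
print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")
```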

  15. Oracle estimation of parametric models under boundary constraints.

    Science.gov (United States)

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference which adapt to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
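
    As a toy illustration of the shrink-to-the-boundary idea, rather than the authors' exact estimator, consider estimating a mean constrained to be non-negative; an adaptive penalty whose weight grows as the unpenalized estimate approaches the boundary snaps the estimate onto it:

```python
import numpy as np

def adaptive_boundary_estimate(y, lam=1.0):
    """Toy adaptive-lasso-style estimate of a mean mu >= 0 (illustrative sketch).

    Minimizes 0.5*sum((y - mu)**2) + (lam / |mu_init|) * mu over mu >= 0, where
    mu_init is the unconstrained sample mean. The closed-form solution is a
    soft-threshold followed by projection onto the boundary mu = 0.
    """
    n = len(y)
    mu_init = np.mean(y)
    weight = lam / max(abs(mu_init), 1e-12)   # adaptive weight: large when mu_init is near 0
    return max(0.0, mu_init - weight / n)

rng = np.random.default_rng(1)
print(adaptive_boundary_estimate(rng.normal(0.05, 1.0, size=50)))  # likely shrunk to the boundary 0
print(adaptive_boundary_estimate(rng.normal(2.00, 1.0, size=50)))  # stays in the interior
```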

  16. An experimental model of mycobacterial infection under corneal flaps

    Directory of Open Access Journals (Sweden)

    C.B.D. Adan

    2004-07-01

    Full Text Available In order to develop a new experimental animal model of infection with Mycobacterium chelonae in keratomileusis, we conducted a double-blind prospective study on 24 adult male New Zealand rabbits. One eye of each rabbit was submitted to automatic lamellar keratotomy with the automatic corneal shaper under general anesthesia. Eyes were immunosuppressed by a single local injection of methyl prednisolone. Twelve animals were inoculated into the keratomileusis interface with 1 µl of 10(6) heat-inactivated bacteria (heat-inactivated inoculum controls) and 12 with 1 µl of 10(6) live bacteria. Trimethoprim drops (0.1%, w/v) were used as prophylaxis for the surgical procedure every 4 h (50 µl, qid). Animals were examined by 2 observers under a slit lamp on the 1st, 3rd, 5th, 7th, 11th, 16th, and 23rd postoperative days. Slit lamp photographs were taken to document clinical signs. Animals were sacrificed when corneal disease was detected and corneal samples were taken for microbiological analysis. Eleven of 12 experimental rabbits developed corneal disease, and M. chelonae could be isolated from nine rabbits. Eleven of the 12 controls receiving a heat-inactivated inoculum did not develop corneal disease. M. chelonae was not isolated from any of the control rabbits receiving a heat-inactivated inoculum, or from the healthy cornea of control rabbits. Corneal infection by M. chelonae was successfully induced in rabbits submitted to keratomileusis. To our knowledge, this is the first animal model of M. chelonae infection following corneal flaps for refractive surgery to be described in the literature and can be used for the analysis of therapeutic responses.

  17. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    We present HYPROLOG, a novel integration of Prolog with assumptions and abduction which is implemented in and partly borrows syntax from Constraint Handling Rules (CHR) for integrity constraints. Assumptions are a mechanism inspired by linear logic and taken over from Assumption Grammars. The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together...

  18. Public key cryptography from weaker assumptions

    DEFF Research Database (Denmark)

    Zottarel, Angela

    This dissertation is focused on the construction of public key cryptographic primitives and on the related security analysis in a meaningful theoretic model. This work takes two orthogonal directions. In the first part, we study cryptographic constructions preserving their security properties also in the case the adversary is granted access to partial information about the secret state of the primitive. To do so, we work in an extension of the standard black-box model, a new framework where possible leakage from the secret state is taken into account. In particular, we give the first construction...

  19. Modeling dynamic behavior of superconducting maglev systems under external disturbances

    Science.gov (United States)

    Huang, Chen-Guang; Xue, Cun; Yong, Hua-Dong; Zhou, You-He

    2017-08-01

    For a maglev system, vertical and lateral displacements of the levitation body may simultaneously occur under external disturbances, which often results in changes in the levitation and guidance forces and even causes some serious malfunctions. To fully understand the effect of external disturbances on the levitation performance, in this work, we build a two-dimensional numerical model on the basis of Newton's second law of motion and a mathematical formulation derived from magnetoquasistatic Maxwell's equations together with a nonlinear constitutive relation between the electric field and the current density. By using this model, we present an analysis of dynamic behavior for two typical maglev systems consisting of an infinitely long superconductor and a guideway of different arrangements of infinitely long parallel permanent magnets. The results show that during the vertical movement, the levitation force is closely associated with the flux motion and the moving velocity of the superconductor. After being disturbed at the working position, the superconductor has a disturbance-induced initial velocity and then starts to periodically vibrate in both lateral and vertical directions. Meanwhile, the lateral and vertical vibration centers gradually drift along their vibration directions. The larger the initial velocity, the faster their vibration centers drift. However, the vertical drift of the vertical vibration center seems to be independent of the direction of the initial velocity. In addition, due to the lateral and vertical drifts, the equilibrium position of the superconductor in the maglev systems is not a space point but a continuous range.

  20. Integrated Bali Cattle Development Model Under Oil Palm Plantation

    Directory of Open Access Journals (Sweden)

    Rasali Hakim Matondang

    2015-09-01

    Full Text Available Bali cattle have several advantages such as high fertility and carcass percentage, as well as easy adaptation to new environments. However, Bali cattle productivity has not yet been optimal, due in part to limited feed resources and the decrease of grazing and agricultural land. The aim of this paper is to describe Bali cattle development integrated with oil palm plantations, which is expected to improve productivity and increase the Bali cattle population. This integration model is carried out by raising Bali cattle under oil palm plantation through a nucleus estate scheme model or individual farmer estate businesses. Several Bali cattle raising systems have been applied in the palm plantation-Bali cattle integration. The intensive system can increase daily weight gain to 0.8 kg/head and calf crop to 35% per year, and has the potential for industrial development of feed and organic fertilizer. The semi-intensive system can improve the production of oil palm fruit bunches (PFB) by more than 10%, increase harvested-crop area to 15 ha/farmer and reduce the amount of inorganic fertilizer. The extensive system can produce a calf crop ≥70%, improve PFB ≥30%, increase business scale to ≥13 cows/farmer and reduce weeding costs ≥16%. Integrated Bali cattle development may provide positive added value for both the palm oil and cattle businesses.

  1. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty

    Science.gov (United States)

    Brown, C.; Lall, U.; Siegfried, T.

    2005-12-01

    Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game Theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory doesn't apply, which is typically the case in water resources where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of
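
    As a purely illustrative sketch of the equilibrium concept invoked here (not the authors' agent-based model), the snippet below computes a Nash equilibrium of a two-user water-withdrawal game with quadratic payoffs by iterating best responses; the payoff parameters are assumptions.

```python
def best_response(a, b, x_other):
    """Best response for payoff u(x) = a*x - x**2 - b*x*x_other, subject to x >= 0."""
    return max(0.0, (a - b * x_other) / 2.0)

def nash_by_iteration(a=10.0, b=1.0, iters=100):
    """Iterate best responses of two water users until the allocations stop changing."""
    x1, x2 = 0.0, 0.0
    for _ in range(iters):
        x1 = best_response(a, b, x2)
        x2 = best_response(a, b, x1)
    return x1, x2

print(nash_by_iteration())   # converges to the symmetric equilibrium a / (2 + b)
```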

  2. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
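
    One widely used between-sample method, the median-of-ratios approach (as in DESeq), rests on the assumption that most genes are not differentially expressed; a minimal numpy sketch with toy counts illustrates how that assumption turns raw counts into per-sample size factors:

```python
import numpy as np

def median_of_ratios_size_factors(counts):
    """Median-of-ratios size factors (DESeq-style); rows = genes, columns = samples.

    Key assumption: most genes are not differentially expressed, so the median
    ratio of a sample to the per-gene geometric mean reflects sequencing depth.
    """
    counts = np.asarray(counts, dtype=float)
    log_geo_mean = np.mean(np.log(counts), axis=1)            # per-gene log geometric mean
    finite = np.isfinite(log_geo_mean)                        # drop genes with any zero count
    log_ratios = np.log(counts[finite]) - log_geo_mean[finite, None]
    return np.exp(np.median(log_ratios, axis=0))              # one size factor per sample

toy_counts = np.array([[100, 200, 150],
                       [ 50, 100,  80],
                       [ 10,  20,  15],
                       [500, 990, 760]])
print(median_of_ratios_size_factors(toy_counts))
```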

  3. Breakdown of Hydrostatic Assumption in Tidal Channel with Scour Holes

    Directory of Open Access Journals (Sweden)

    Chunyan Li

    2016-10-01

    Full Text Available Hydrostatic condition is a common assumption in tidal and subtidal motions in oceans and estuaries. Theories with this assumption have been largely successful. However, there are no definite criteria separating the hydrostatic from the non-hydrostatic regimes in real applications because real problems often have multiple scales. With increased refinement of high resolution numerical models encompassing smaller and smaller spatial scales, the need for non-hydrostatic models is increasing. To evaluate the vertical motion over bathymetric changes in tidal channels and assess the validity of the hydrostatic approximation, we conducted observations using a vessel-based acoustic Doppler current profiler (ADCP). Observations were made along a straight channel 18 times over two scour holes 25 m deep, separated by 330 m, in and out of an otherwise flat 8 m deep tidal pass leading to Lake Pontchartrain, over a time period of 8 hours covering part of the diurnal tidal cycle. Out of the 18 passages over the scour holes, 11 of them showed strong upwelling and downwelling which resulted in the breakdown of the hydrostatic condition. The maximum observed vertical velocity was ~0.35 m/s, a high value in a tidal channel, and the estimated vertical acceleration reached a high value of 1.76×10⁻² m/s². Analysis demonstrated that the barotropic non-hydrostatic acceleration was dominant. The non-hydrostatic flow was caused by the flow over the steep slopes of the scour holes. This demonstrates that in such a system, the bathymetric variation can lead to the breakdown of hydrostatic conditions. Models with hydrostatic restrictions will not be able to correctly capture the dynamics in such a system with significant bathymetric variations, particularly during strong tidal currents.

  4. Model evaluation of denitrification under rapid infiltration basin systems.

    Science.gov (United States)

    Akhavan, Maryam; Imhoff, Paul T; Andres, A Scott; Finsterle, Stefan

    2013-09-01

    Rapid Infiltration Basin Systems (RIBS) are used for disposing reclaimed wastewater into soil to achieve additional treatment before it recharges groundwater. Effluent from most new sequenced batch reactor wastewater treatment plants is completely nitrified, and denitrification (DNF) is the main reaction for N removal. To characterize effects of complex surface and subsurface flow patterns caused by non-uniform flooding on DNF, a coupled overland flow-vadose zone model is implemented in the multiphase flow and reactive transport simulator TOUGHREACT. DNF is simulated in two representative soils varying the application cycle, hydraulic loading rate, wastewater quality, water table depth, and subsurface heterogeneity. Simulations using the conventional specified flux boundary condition under-predict DNF by as much as 450% in sand and 230% in loamy sand compared to predictions from the coupled overland flow-vadose zone model, indicating that simulating coupled flow is critical for predicting DNF in cases where hydraulic loading rates are not sufficient to spread the wastewater over the whole basin. Smaller ratios of wetting to drying time and larger hydraulic loading rates result in greater water saturations, more anoxic conditions, and faster water transport in the vadose zone, leading to greater DNF. These results in combination with those from different water table depths explain why reported DNF varied with soil type and water table depth in previous field investigations. Across all simulations, cumulative percent DNF varies between 2 and 49%, indicating that NO₃ removal in RIBS may vary widely depending on operational procedures and subsurface conditions. These modeling results improve understanding of DNF in RIBS and suggest operational procedures that may improve NO₃ removal. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Globally Constrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

    Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation ... be obtained. This paper presents a new approach for system modelling under partial (global) information (or the so-called Gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations ... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under prior linear time invariant structure, where local regression fails as a result of high dimensionality.
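
    Local function approximation as described here amounts to weighted least squares in a neighbourhood of each query point; the sketch below shows a plain locally weighted linear fit (the Gaussian kernel and bandwidth are illustrative choices, and the paper's hierarchical/global coupling is not reproduced):

```python
import numpy as np

def local_linear_fit(x, y, x0, bandwidth=0.3):
    """Fit y ~ b0 + b1*(x - x0) with Gaussian weights centred at x0; return the local estimate b0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)     # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])     # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least squares
    return beta[0]

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.1, size=x.size)
query = np.linspace(0.0, 2 * np.pi, 50)
smooth = np.array([local_linear_fit(x, y, q) for q in query])
```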

  6. 76 FR 81966 - Agency Information Collection Activities; Proposed Collection; Comments Requested; Assumption of...

    Science.gov (United States)

    2011-12-29

    ... Indian country is subject to State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to... Collection; Comments Requested; Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country ACTION: 60-Day notice of information collection under review. The Department of Justice...

  7. Modelo Century de dinâmica da matéria orgânica do solo: equações e pressupostos Century model of soil organic matter dynamics: equations and assumptions

    Directory of Open Access Journals (Sweden)

    Luiz Fernando Carvalho Leite

    2003-08-01

    Full Text Available The modeling of biological processes has as objectives the planning of land use, the setting of environmental standards and the estimation of the actual and potential risks of agricultural and environmental activities. Several models have been created in the last 25 years. Century is a mechanistic model that analyzes, in the long term, the dynamics of soil organic matter and of nutrients in the soil-plant system in several agroecosystems. The soil organic matter submodel has active (microbial biomass and products), slow (plant and microbial products that are physically protected or biologically resistant to decomposition) and passive (chemically recalcitrant or also physically protected) compartments with different decomposition rates. First-order equations are used to model all soil organic matter compartments, and soil temperature and moisture modify the decomposition rates. Recycling of the active compartment and formation of the passive one are controlled by the sand and clay contents of the soil, respectively. Plant residues are divided into compartments depending on their lignin and nitrogen contents. Through the model, organic matter can be related to fertility levels and to current and future management, improving the understanding of nutrient transformations in soils of several agroecosystems.
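
    The core equations referred to are first-order decay for each pool, with rates scaled by temperature and moisture modifiers; a minimal three-pool sketch (rate constants, modifiers and transfer fractions are illustrative assumptions, not calibrated Century parameters):

```python
# Illustrative base decomposition rates (1/yr) for active, slow and passive pools.
K = {"active": 7.0, "slow": 0.2, "passive": 0.0045}

def step_pools(pools, litter_in, temp_factor, moist_factor, dt=1.0 / 12.0):
    """Advance SOM pools one time step with first-order decay dC/dt = -k * f(T) * f(W) * C."""
    mod = temp_factor * moist_factor
    decomposed = {p: K[p] * mod * c * dt for p, c in pools.items()}
    # Simple, assumed transfer scheme: part of the decomposed active C feeds the slow pool,
    # part of the decomposed slow C feeds the passive pool, the rest is respired.
    return {
        "active": pools["active"] - decomposed["active"] + litter_in * dt,
        "slow": pools["slow"] - decomposed["slow"] + 0.3 * decomposed["active"],
        "passive": pools["passive"] - decomposed["passive"] + 0.05 * decomposed["slow"],
    }

pools = {"active": 0.5, "slow": 5.0, "passive": 20.0}   # kg C / m^2, illustrative
for _ in range(12 * 50):                                # 50 years, monthly steps
    pools = step_pools(pools, litter_in=0.3, temp_factor=0.8, moist_factor=0.9)
print(pools)
```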

  8. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  9. Factor structure and concurrent validity of the world assumptions scale.

    Science.gov (United States)

    Elklit, Ask; Shevlin, Mark; Solomon, Zahava; Dekel, Rachel

    2007-06-01

    The factor structure of the World Assumptions Scale (WAS) was assessed by means of confirmatory factor analysis. The sample was comprised of 1,710 participants who had been exposed to trauma that resulted in whiplash. Four alternative models were specified and estimated using LISREL 8.72. A correlated 8-factor solution was the best explanation of the sample data. The estimates of reliability of eight subscales of the WAS ranged from .48 to .82. Scores from five subscales correlated significantly with trauma severity as measured by the Harvard Trauma Questionnaire, although the magnitude of the correlations was low to modest, ranging from .08 to -.43. It is suggested that the WAS has adequate psychometric properties for use in both clinical and research settings.

  10. Assumptions of Customer Knowledge Enablement in the Open Innovation Process

    Directory of Open Access Journals (Sweden)

    Jokubauskienė Raminta

    2017-08-01

    Full Text Available In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but customers nevertheless stand as the cornerstone in this area, since customers' knowledge is one of the most important sources of new knowledge and ideas. When evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customers' knowledge management facilitates the creation of innovations. However, other factors that influence open innovation and, at the same time, customers' knowledge management should also be examined. This article presents a theoretical model which reveals the assumptions of the open innovation process and their impact on the firm's performance.

  11. Finansal Varlıkları Fiyatlama Modelinin Analizi: Varsayımlar, Bulgular ve Hakkındaki Eleştiriler (An Analysis of Capital Asset Pricing Model: Assumptions, Arguments and Critics)

    Directory of Open Access Journals (Sweden)

    Hakan Bilir

    2016-03-01

    Full Text Available The evaluation of investment opportunities depends on the measurement of expected return and risk. The Capital Asset Pricing Model (CAPM) has for many years been one of the cornerstones of modern finance theory. The model establishes a simple linear relationship between the expected return of assets and their systematic risk. The model is still used to compute the cost of capital, to measure the performance of portfolio management and to evaluate investments. The appeal of the CAPM comes from its strong predictive claims about the measurement of risk and of the relationship between expected return and risk. However, this ability of the model has been questioned by academics and practitioners for more than 30 years. The debate has largely been conducted at the empirical level. The empirical problems of the CAPM are theoretical failures stemming from its many simplifying assumptions. The large number of unrealistic assumptions renders the model practically unusable. The main criticisms of the model concentrate on the risk-free interest rate, the market portfolio and the beta coefficient.
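
    For reference, the model's single equation is E[Ri] = Rf + βi(E[Rm] − Rf), with βi = Cov(Ri, Rm)/Var(Rm); a short numpy illustration with made-up return series:

```python
import numpy as np

rng = np.random.default_rng(3)
market = rng.normal(0.008, 0.04, size=120)                  # monthly market returns (illustrative)
asset = 0.002 + 1.2 * market + rng.normal(0.0, 0.02, 120)   # asset with a "true" beta of 1.2

beta = np.cov(asset, market, ddof=1)[0, 1] / np.var(market, ddof=1)

risk_free = 0.002                    # monthly risk-free rate (assumption)
expected_market = market.mean()
expected_asset = risk_free + beta * (expected_market - risk_free)   # the CAPM/SML equation
print(f"beta = {beta:.2f}, CAPM expected return = {expected_asset:.4f}")
```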

  12. Characteristics and modeling of spruce wood under dynamic compression load

    International Nuclear Information System (INIS)

    Eisenacher, Germar

    2014-01-01

    criterion uses linear interpolation of the strength of constrained and unconstrained spruce wood. Thus multiaxial stress states can be considered. The calculation of the crush tests showed the ability of the model to reproduce the basic strength characteristics of spruce wood. The effect of lateral constraint can be reproduced well due to the uncoupled evolution of the yield surface. On the contrary, the strength is overestimated for load under acute angles, which could be prevented using modified yield surfaces. The effects of strain rate and temperature are generally reproduced well but the scaling factors used should be improved. The calculation of a drop test with a test-package equipped with wood-filled impact limiters confirmed the model's performance and produced feasible results. However, to create a verified impact limiter model further numerical and experimental investigations are necessary. This work makes an important contribution to the numerical stress analysis in the context of safety cases of transport packages.

  13. Models and algorithms for midterm production planning under uncertainty: application of proximal decomposition methods

    International Nuclear Information System (INIS)

    Lenoir, A.

    2008-01-01

    In this thesis we focus on the optimization of large systems under uncertainty, and more specifically on solving the class of so-called deterministic equivalents with the help of splitting methods. The underlying application we have in mind is the electricity unit commitment problem under climate, market and energy consumption randomness, arising at EDF. We set out the natural time-space-randomness couplings related to this application and propose two new discretization schemes to tackle the randomness coupling, each of them based on non-parametric estimation of conditional expectations. This constitutes an alternative to the usual scenario tree construction. We use a mathematical model consisting of the sum of two convex functions, a separable one and a coupling one. On the one hand, this simplified model offers a general framework to study decomposition-coordination algorithms while avoiding the technicalities due to a particular choice of subsystems. On the other hand, the convexity assumption makes it possible to take advantage of monotone operator theory and to identify proximal methods as fixed point algorithms. We underline the differential properties of the generalized reactions whose fixed point we seek, in order to derive bounds on the speed of convergence. Then we examine two families of decomposition-coordination algorithms resulting from operator splitting methods, namely the Forward-Backward and Rachford methods. We suggest some practical methods of acceleration for the Rachford class of methods. To this end, we analyze the method from a theoretical point of view, furnishing as a byproduct explanations for some numerical observations. Then we propose some improvements in response. Among them, an automatic updating strategy for scaling factors can correct a potentially bad initial choice. The convergence proof is made easier thanks to stability results, provided beforehand, on certain operator compositions with respect to graphical convergence. We also submit the idea of introducing
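
    Of the two splitting families mentioned, the Forward-Backward scheme is the simplest to sketch: a gradient (forward) step on the smooth coupling term followed by a proximal (backward) step on the separable term. Below it is applied to a toy lasso problem; the problem and step size are illustrative, not the electricity unit-commitment application.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_lasso(A, b, lam, iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by forward-backward splitting."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1/L, L = Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the l1 part
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + rng.normal(0.0, 0.1, size=50)
print(forward_backward_lasso(A, b, lam=1.0)[:5])
```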

  14. An examination of the impact of care giving styles (accommodation and skilful communication and support) on the one year outcome of adolescent anorexia nervosa: Testing the assumptions of the cognitive interpersonal model in anorexia nervosa.

    Science.gov (United States)

    Salerno, Laura; Rhind, Charlotte; Hibbs, Rebecca; Micali, Nadia; Schmidt, Ulrike; Gowers, Simon; Macdonald, Pamela; Goddard, Elizabeth; Todd, Gillian; Lo Coco, Gianluca; Treasure, Janet

    2016-02-01

    The cognitive interpersonal model predicts that parental caregiving style will impact on the rate of improvement of anorexia nervosa symptoms. The study aims to examine whether the absolute levels and the relative congruence between mothers' and fathers' caregiving styles influenced the rate of change of their children's symptoms of anorexia nervosa over 12 months. Triads (n=54) consisting of patients with anorexia nervosa and both of their parents were included in the study. Caregivers completed the Caregiver Skills Scale and the Accommodation and Enabling Scale at intake. Patients completed the Short Evaluation of Eating Disorders at intake and at monthly intervals for one year. Polynomial Hierarchical Linear Modeling was used for the analysis. There is a person/dose-dependent relationship between accommodation and patients' outcome, i.e. when both mother and father are highly accommodating outcome is poor, if either is highly accommodating outcome is intermediate, and if both parents are low on accommodation outcome is good. Outcome is also good if both parents, or the mother alone, have high levels of carer skills, and poor if both have low levels of skills. The inclusion of only a sub-sample of an adolescent clinical population, the lack of data on time spent caregiving, and the reliance on patients' self-reported outcome data limit the generalisability of the current findings. Accommodating and enabling behaviours by family members can serve to maintain eating disorder behaviours. However, skilful behaviours, particularly by mothers, can aid recovery. Clinical interventions to optimise caregiving skills and to reduce accommodation by both parents may be an important addition to treatment for anorexia nervosa. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Testing the rationality assumption using a design difference in the TV game show 'Jeopardy'

    OpenAIRE

    Sjögren Lindquist, Gabriella; Säve-Söderbergh, Jenny

    2006-01-01

    This paper empirically investigates the rationality assumption commonly applied in economic modeling by exploiting a design difference in the game-show Jeopardy between the US and Sweden. In particular we address the assumption of individuals’ capabilities to process complex mathematical problems to find optimal strategies. The vital difference is that US contestants are given explicit information before they act, while Swedish contestants individually need to calculate the same info...

  16. Robust nonlinear control of nuclear reactors under model uncertainty

    International Nuclear Information System (INIS)

    Park, Moon Ghu

    1993-02-01

    uncertainty. The performance specification in the boundary layer is also proposed. In the boundary layer, a direct adaptive controller is developed which consists of the adaptive proportional-integral-feed forward (PIF) gains. The essence of the controller is to divide the control into four different terms. Namely, the adaptive P-I-F gains and time-optimal controller are used to accomplish the specific control actions by each term. The robustness of the controller is guaranteed by the feedback of the estimated uncertainty and the performance specification given by the adaptation of PIF gains using the second method of Lyapunov. The newly developed control method is applied to the power tracking control of a nuclear reactor and the simulation results show great improvement in tracking performance compared with the conventional control methods. In addition, a constraint-accommodating adaptive control method is developed. The method is based on a dead-best identified plant model and a simple, but mathematically constructive, adaptation rule for the model-based PI feedback gains. The method is particularly devoted to the considerations on the output constraint. The effectiveness of the controller is shown by application of the method to the power tracking control of Korea Multipurpose Research Reactor (KMRR). The simulation results show robustness against modeling uncertainty and excellent performance under unknown deteriorating actuator condition. It is concluded that the nonlinear control methods developed in this thesis and based on the use of a simple uncertainty estimator and adaptation algorithms for feedback and feedforward gains provide not only robustness against modeling uncertainty but also very fast and smooth performance behavior

  17. Exploration of Disease Markers under Translational Medicine Model

    Directory of Open Access Journals (Sweden)

    Rajagopal Krishnamoorthy

    2015-06-01

    Full Text Available Disease markers are defined as biomarkers with specific characteristics during general physical, pathological or therapeutic processes, whose detection can indicate the progression of the biological processes under way in an organism. However, the exploration of disease markers is complicated and difficult: only a few markers can be used in clinical practice, and there is no significant difference in cancer mortality before and after biomarker exploration. Translational medicine focuses on breaking the blockage between basic medicine and clinical practice. In addition, it establishes an effective association between researchers engaged in basic scientific discovery and clinical physicians well informed of patients' requirements, and gives particular attention to how to translate basic molecular biological research into the most effective and appropriate methods for the diagnosis, treatment and prevention of diseases, hoping to translate basic research into new therapeutic methods in the clinic. Therefore, this study summarizes the exploration of disease markers under the translational medicine model so as to provide a basis for the translation of basic research results into clinical application.

  18. Modeling of the response under radiation of electronic dosemeters

    International Nuclear Information System (INIS)

    Menard, S.

    2003-01-01

    Simulating with calculation codes the interactions and transport of primary and secondary radiation in detectors makes it possible to reduce the number of prototypes developed and the number of experiments under radiation. Simulation allows the response of the instrument to be determined for exposure configurations broader than those of the reference radiations produced in laboratories. The M.C.N.P.X. code can transport, in addition to photons, electrons and neutrons, charged particles heavier than electrons, and can simulate radiation-matter interactions for a certain number of particles. The present paper aims to show the interest of using the M.C.N.P.X. code in the study, research and evaluation phases of the instrumentation necessary for dosimetry monitoring. To do so, the presentation gives the results of the modeling of a prototype tissue-equivalent proportional counter (C.P.E.T.) and of the C.R.A.M.A.L. (a radiation protection apparatus marketed by the Eurisys Mesures company). (N.C.)

  19. Wake of a blunt planetary probe model under hypervelocity conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kastell, D.; Hannemann, D.; Eitelberg, G. [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Goettingen (Germany). Inst. fuer Stroemungsmechanik

    1998-12-31

    The flow in the wake of a planetary probe under hypervelocity re-entry conditions has two idiosyncrasies not present in the conventional (cold) hypersonic flows: the strong dissociation reaction occurring behind the bow shock wave, and the freezing of the chemical reactions of the flow by the rapid expansion at the shoulder of the probe. The aim of the present study was both to understand the relative importance of the two phenomena for the total heat and pressure loads on a planetary probe and its possible payload, and to provide experimental validation data for those developing numerical codes for planetary probe design and analysis. For the experimental study an instrumented blunted 140 cone was tested in the High Enthalpy Shock Tunnel in Goettingen (HEG). The numerical calculations were performed with a Thin-Layer Navier-Stokes code which is capable of simulating chemical and thermal nonequilibrium flows. For the forebody loads the prediction methods were very reliable and capable of accounting for the kinetic effects caused by the high specific enthalpy of the flow. On the other hand, considerable discrepancies between experimental and numerical results for the wake of the model have been observed. (orig.)

  1. A unifying model of genome evolution under parsimony.

    Science.gov (United States)

    Paten, Benedict; Zerbino, Daniel R; Hickey, Glenn; Haussler, David

    2014-06-19

    Parsimony and maximum likelihood methods of phylogenetic tree estimation and parsimony methods for genome rearrangements are central to the study of genome evolution yet to date they have largely been pursued in isolation. We present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. It conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join (DCJ) rearrangements in the presence of duplications. The problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. We show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and DCJ rearrangements needed to explain any history graph. These bounds become tight for a special type of unambiguous history graph called an ancestral variation graph (AVG), which constrains in its combinatorial structure the number of operations required. We finally demonstrate that for a given history graph G, a finite set of AVGs describe all parsimonious interpretations of G, and this set can be explored with a few sampling moves. This theoretical study describes a model in which the inference of genome rearrangements and phylogeny can be unified under parsimony.

  2. Contemporary assumptions on human nature and work and approach to human potential managing

    Directory of Open Access Journals (Sweden)

    Vujić Dobrila

    2006-01-01

    Full Text Available The general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and the preference for a general organizational model, as well as for particular mechanisms of human resource management. The research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and in a group of small business enterprises differing in terms of the entrepreneurial structure and type of activity. The general hypothesis, that assumptions on human nature and work are significantly connected to the preferred approach (model) of work motivation and commitment, has been confirmed. Specific hypotheses have also been confirmed: assumptions on the human as a rational economic being correlate significantly with only two mechanisms of the traditional model, the work-method control mechanism and the working discipline mechanism; assumptions on the human as a social being correlate significantly with all mechanisms of engaging employees belonging to the human relations model, except the mechanism of introducing an adequate type of reward for all employees independently of working results; assumptions on the human as a creative being correlate significantly and positively with the preference for two mechanisms belonging to the human resource model, investing in education and training and creating conditions for the application of knowledge and skills. Young subjects holding assumptions on the human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resources model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and preferred models of engagement appears especially in the sub-sample of managers, in the category of young subjects

  3. Corticonic models of brain mechanisms underlying cognition and intelligence

    Science.gov (United States)

    Farhat, Nabil H.

    The concern of this review is brain theory or more specifically, in its first part, a model of the cerebral cortex and the way it: (a) interacts with subcortical regions like the thalamus and the hippocampus to provide higher-level-brain functions that underlie cognition and intelligence, (b) handles and represents dynamical sensory patterns imposed by a constantly changing environment, (c) copes with the enormous number of such patterns encountered in a lifetime by means of dynamic memory that offers an immense number of stimulus-specific attractors for input patterns (stimuli) to select from, (d) selects an attractor through a process of “conjugation” of the input pattern with the dynamics of the thalamo-cortical loop, (e) distinguishes between redundant (structured) and non-redundant (random) inputs that are void of information, (f) can do categorical perception when there is access to vast associative memory laid out in the association cortex with the help of the hippocampus, and (g) makes use of “computation” at the edge of chaos and information driven annealing to achieve all this. Other features and implications of the concepts presented for the design of computational algorithms and machines with brain-like intelligence are also discussed. The material and results presented suggest, that a Parametrically Coupled Logistic Map network (PCLMN) is a minimal model of the thalamo-cortical complex and that marrying such a network to a suitable associative memory with re-entry or feedback forms a useful, albeit, abstract model of a cortical module of the brain that could facilitate building a simple artificial brain. In the second part of the review, the results of numerical simulations and drawn conclusions in the first part are linked to the most directly relevant works and views of other workers. What emerges is a picture of brain dynamics on the mesoscopic and macroscopic scales that gives a glimpse of the nature of the long sought after brain code
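
    The PCLMN itself is not specified in this summary; as a generic, purely illustrative picture of a network of coupled logistic maps (the ring coupling and parameter values are assumptions), consider:

```python
import numpy as np

def coupled_logistic_step(x, a, eps):
    """One update of a ring of diffusively coupled logistic maps: x' = (1-eps)*f(x) + eps*<f(neighbours)>."""
    fx = a * x * (1.0 - x)                                # local logistic dynamics
    neighbours = 0.5 * (np.roll(fx, 1) + np.roll(fx, -1))
    return (1.0 - eps) * fx + eps * neighbours

rng = np.random.default_rng(5)
x = rng.random(100)                 # 100 units with random initial activity
a = np.full(100, 3.7)               # per-unit parameters (could be driven by an input pattern)
for _ in range(200):
    x = coupled_logistic_step(x, a, eps=0.1)
```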

  4. Investigating the Assumptions of Uses and Gratifications Research

    Science.gov (United States)

    Lometti, Guy E.; And Others

    1977-01-01

    Discusses a study designed to determine empirically the gratifications sought from communication channels and to test the assumption that individuals differentiate channels based on gratifications. (MH)

  5. Legal assumptions for private company claim for additional (supplementary payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    Full Text Available The subject matter of analysis in this article is the legal assumptions which must be met in order to enable a private company to claim additional payments. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payments in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting which creates individual obligations for additional payments. The third assumption is defined as definiteness regarding the sum of the payment and its due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for the realization of the company's right to claim additional payments from a member of the private company.

  6. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of the incorrect modeling of the failure process of minimally repaired systems that operate under random environmental conditions on the costs of periodic replacement maintenance. The motivation of this paper is given by a recently published paper, where a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions of the expected cost and of the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences caused by the incorrect modeling of the failure process can be.
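
    The quantity at stake is the expected cost per unit time of periodic replacement with minimal repairs in between, C(T) = [c_r + c_m·Λ(T)]/T, where Λ(T) is the expected number of failures in (0, T]; a small sketch with a power-law intensity (all cost and shape parameters are illustrative):

```python
import numpy as np

def cost_rate(T, c_replace, c_repair, beta, eta):
    """Expected cost per unit time under periodic replacement at age T with minimal repairs.

    Failures follow a power-law (Weibull-type) intensity, so the expected number of
    minimal repairs in (0, T] is Lambda(T) = (T / eta) ** beta.
    """
    expected_repairs = (T / eta) ** beta
    return (c_replace + c_repair * expected_repairs) / T

T_grid = np.linspace(0.1, 10.0, 500)
rates = cost_rate(T_grid, c_replace=100.0, c_repair=25.0, beta=2.2, eta=3.0)
T_opt = T_grid[np.argmin(rates)]
print(f"optimal replacement period ~ {T_opt:.2f} (illustrative parameters)")
```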

  7. Issues affecting the electricity transmission system in Mexico under a competitive integrated model

    Energy Technology Data Exchange (ETDEWEB)

    Avila Rosales, M.A.; Gonzalez Flores, J. [Federal Electricity Commission, Mexico City (Mexico)

    2008-07-01

    The electricity sector in Mexico is undergoing a process of significant structural change. The traditional industry framework has been exposed to new market structures and greater competition, both of which are being introduced by changing regulations regarding who can generate, transmit, distribute and sell electricity. Mexico's power industry is changing to a competitive integrated model. Electricity industry restructuring is partly based on the assumption that transmission systems should be flexible, reliable, and open to all exchanges no matter where the suppliers and consumers of energy are located and who they are. However, neither the existing transmission systems nor their management infrastructure can fully support this open exchange. This paper described the primary issues affecting the transmission system in Mexico under a competitive environment and a transmission expansion planning approach that took the uncertainties associated with the location and size of new generating power stations into consideration in order to produce least-cost and robust transmission plans. The paper described the planning process, including a rigorous analysis of the economics of the resulting transmission plans. Specifically, the paper described the current regulatory framework and supply adequacy as well as current procedures and methodologies for transmission management and expansion planning. The transmission planning methodology was also presented. This included a minimum cost analysis; profit analysis; and least-cost transmission plan. It was concluded that the transmission expansion planning approach stressed that a horizon year viewpoint was important because transmission additions have long-term use. The transmission expansion planning approach further defined the process of selecting transmission projects as one of comparing and optimizing attributes such as near-term needs; long-term utilization; contribution to overall reliability; and favorable or least

  8. Proton Therapy Expansion Under Current United States Reimbursement Models

    Energy Technology Data Exchange (ETDEWEB)

    Kerstiens, John [Indiana University Health Proton Therapy Center, Bloomington, Indiana (United States); Johnstone, Peter A.S., E-mail: pajohnst@iupui.edu [Indiana University Health Proton Therapy Center, Bloomington, Indiana (United States); Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, Indiana (United States)

    2014-06-01

    Purpose: To determine whether all the existing and planned proton beam therapy (PBT) centers in the United States can survive on a local patient mix that is dictated by insurers, not by number of patients. Methods and Materials: We determined current and projected cancer rates for 10 major US metropolitan areas. Using published utilization rates, we calculated patient percentages who are candidates for PBT. Then, on the basis of current published insurer coverage policies, we applied our experience of what would be covered to determine the net number of patients for whom reimbursement is expected. Having determined the net number of covered patients, we applied our average beam delivery times to determine the total number of minutes needed to treat each patient over the course of their treatment. We then calculated our expected annual patient capacity per treatment room to determine the appropriate number of treatment rooms for the area. Results: The population of patients who will be both PBT candidates and will have treatments reimbursed by insurance is significantly smaller than the population who should receive PBT. Coverage decisions made by insurers reduce the number of PBT rooms that are economically viable. Conclusions: The expansion of PBT centers in the US is not sustainable under the current reimbursement model. Viability of new centers will be limited to those operating in larger regional metropolitan areas, and few metropolitan areas in the US can support multiple centers. In general, 1-room centers require captive (non–PBT-served) populations of approximately 1,000,000 lives to be economically viable, and a large center will require a population of >4,000,000 lives. In areas with smaller populations or where a PBT center already exists, new centers require subsidy.
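
    The capacity logic described is simple arithmetic: room-minutes available per year divided by beam-delivery minutes per treatment course gives patients per room per year, which is then compared with the insurer-covered patient pool; a sketch with purely illustrative numbers:

```python
# Illustrative capacity arithmetic (all numbers are assumptions, not the paper's data).
minutes_per_fraction = 30            # beam-delivery plus setup time per fraction
fractions_per_course = 25            # fractions in a typical treatment course
operating_minutes_per_day = 14 * 60  # 14-hour treatment day
treatment_days_per_year = 250

minutes_per_course = minutes_per_fraction * fractions_per_course
room_minutes_per_year = operating_minutes_per_day * treatment_days_per_year
patients_per_room_per_year = room_minutes_per_year // minutes_per_course

covered_patients_in_area = 180       # PBT candidates whose treatment insurers will reimburse
rooms_supported = covered_patients_in_area / patients_per_room_per_year
print(patients_per_room_per_year, round(rooms_supported, 2))
```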

  9. Addressing potential local adaptation in species distribution models: implications for conservation under climate change

    Science.gov (United States)

    Hällfors, Maria Helena; Liao, Jishan; Dzurisin, Jason D. K.; Grundel, Ralph; Hyvärinen, Marko; Towle, Kevin; Wu, Grace C.; Hellmann, Jessica J.

    2016-01-01

    Species distribution models (SDMs) have been criticized for involving assumptions that ignore or categorize many ecologically relevant factors such as dispersal ability and biotic interactions. Another potential source of model error is the assumption that species are ecologically uniform in their climatic tolerances across their range. Typically, SDMs treat a species as a single entity, although populations of many species differ due to local adaptation or other genetic differentiation. Not taking local adaptation into account may lead to incorrect range predictions and therefore misplaced conservation efforts. However, a constraint is that we often do not know the degree to which populations are locally adapted. Lacking experimental evidence, we can still evaluate niche differentiation within a species' range to promote better conservation decisions. We explore possible conservation implications of making type I or type II errors in this context. For each of two species, we construct three separate MaxEnt models, one treating the species as a single population and two treating disjunct populations separately. PCA analyses and response curves indicate different climate characteristics in the current environments of the populations. Model projections into future climates indicate minimal overlap between areas predicted to be climatically suitable by the whole-species versus population-based models. We present a workflow for addressing uncertainty surrounding local adaptation in SDM application and illustrate the value of conducting population-based models to compare with whole-species models. These comparisons might result in more cautious management actions when alternative range outcomes are considered.

  10. An Integrated Coral Reef Ecosystem Model to Support Resource Management under a Changing Climate.

    Science.gov (United States)

    Weijerman, Mariska; Fulton, Elizabeth A; Kaplan, Isaac C; Gorton, Rebecca; Leemans, Rik; Mooij, Wolf M; Brainard, Russell E

    2015-01-01

    Millions of people rely on the ecosystem services provided by coral reefs, but sustaining these benefits requires an understanding of how reefs and their biotic communities are affected by local human-induced disturbances and global climate change. Ecosystem-based management that explicitly considers the indirect and cumulative effects of multiple disturbances has been recommended and adopted in policies in many places around the globe. Ecosystem models give insight into complex reef dynamics and their responses to multiple disturbances and are useful tools to support planning and implementation of ecosystem-based management. We adapted the Atlantis Ecosystem Model to incorporate key dynamics for a coral reef ecosystem around Guam in the tropical western Pacific. We used this model to quantify the effects of predicted climate and ocean changes and current levels of land-based sources of pollution (LBSP) and fishing. We used the following six ecosystem metrics as indicators of ecosystem state, resilience and harvest potential: 1) ratio of calcifying to non-calcifying benthic groups, 2) trophic level of the community, 3) biomass of apex predators, 4) biomass of herbivorous fishes, 5) total biomass of living groups and 6) the end-to-start ratio of exploited fish groups. Simulation tests of the effects of each of the three drivers separately suggest that by mid-century climate change will have the largest overall effect on this suite of ecosystem metrics due to substantial negative effects on coral cover. The effects of fishing were also important, negatively influencing five out of the six metrics. Moreover, LBSP exacerbates this effect for all metrics but not quite as badly as would be expected under additive assumptions, although the magnitude of the effects of LBSP are sensitive to uncertainty associated with primary productivity. Over longer time spans (i.e., 65 year simulations), climate change impacts have a slight positive interaction with other drivers

  11. An Integrated Coral Reef Ecosystem Model to Support Resource Management under a Changing Climate.

    Directory of Open Access Journals (Sweden)

    Mariska Weijerman

    Full Text Available Millions of people rely on the ecosystem services provided by coral reefs, but sustaining these benefits requires an understanding of how reefs and their biotic communities are affected by local human-induced disturbances and global climate change. Ecosystem-based management that explicitly considers the indirect and cumulative effects of multiple disturbances has been recommended and adopted in policies in many places around the globe. Ecosystem models give insight into complex reef dynamics and their responses to multiple disturbances and are useful tools to support planning and implementation of ecosystem-based management. We adapted the Atlantis Ecosystem Model to incorporate key dynamics for a coral reef ecosystem around Guam in the tropical western Pacific. We used this model to quantify the effects of predicted climate and ocean changes and current levels of land-based sources of pollution (LBSP) and fishing. We used the following six ecosystem metrics as indicators of ecosystem state, resilience and harvest potential: 1) ratio of calcifying to non-calcifying benthic groups, 2) trophic level of the community, 3) biomass of apex predators, 4) biomass of herbivorous fishes, 5) total biomass of living groups and 6) the end-to-start ratio of exploited fish groups. Simulation tests of the effects of each of the three drivers separately suggest that by mid-century climate change will have the largest overall effect on this suite of ecosystem metrics due to substantial negative effects on coral cover. The effects of fishing were also important, negatively influencing five out of the six metrics. Moreover, LBSP exacerbates this effect for all metrics but not quite as badly as would be expected under additive assumptions, although the magnitude of the effects of LBSP is sensitive to uncertainty associated with primary productivity. Over longer time spans (i.e., 65 year simulations), climate change impacts have a slight positive interaction with

  12. Can phenological models predict tree phenology accurately under climate change conditions?

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has been globally earlier by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions in future climate conditions. It is indeed well known that a lack of low temperatures results in abnormal patterns of bud break and development in temperate fruit trees. An accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
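
    The distinction between the two model families can be made concrete with a toy implementation. The Python sketch below contrasts a one-phase forcing (growing degree-day) model with a two-phase chilling-then-forcing model; all thresholds, requirements and function names are illustrative assumptions and do not correspond to the calibrated models discussed in the abstract.

      def one_phase_budburst(daily_tmean, t_base=5.0, forcing_req=150.0):
          """One-phase model: only ecodormancy is represented, so degree-days above
          t_base accumulate from day 1 and budburst occurs once the forcing
          requirement is met. Illustrative parameter values."""
          forcing = 0.0
          for doy, t in enumerate(daily_tmean, start=1):
              forcing += max(t - t_base, 0.0)
              if forcing >= forcing_req:
                  return doy
          return None  # forcing requirement never met

      def two_phase_budburst(daily_tmean, t_chill=5.0, chill_req=60.0,
                             t_base=5.0, forcing_req=150.0):
          """Two-phase model: chilling days below t_chill must first break
          endodormancy; only afterwards do forcing degree-days accumulate,
          so the dormancy-break date varies with winter temperatures."""
          chill, forcing = 0.0, 0.0
          for doy, t in enumerate(daily_tmean, start=1):
              if chill < chill_req:
                  chill += 1.0 if t <= t_chill else 0.0
              else:
                  forcing += max(t - t_base, 0.0)
                  if forcing >= forcing_req:
                      return doy
          return None

    Under a warmed winter temperature series the two-phase sketch can return a later budburst date, or none at all, because the chilling requirement is satisfied later; this is exactly the behaviour a one-phase model cannot reproduce.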

  13. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...

  14. 40 CFR 264.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... FACILITIES Financial Requirements § 264.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  15. 40 CFR 261.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... Excluded Hazardous Secondary Materials § 261.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  16. 40 CFR 265.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Financial Requirements § 265.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  17. 40 CFR 144.66 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State assumption of responsibility...

  18. 40 CFR 267.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... STANDARDIZED PERMIT Financial Requirements § 267.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability... 40 Protection of Environment 26 2010-07-01 2010-07-01 false State assumption of responsibility...

  19. PFP issues/assumptions development and management planning guide

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, both open and current issues/assumptions (Appendix A), and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, this document will periodically require revisions resulting from improvements of the information, processes, and techniques as now described. Revisions that suggest improved processes will only require PFP management approval

  20. Assessing and relaxing assumptions in quasi-simplex models

    NARCIS (Netherlands)

    Lugtig, Peter; Cernat, Alexandru; Uhrig, Noah; Watson, Nicole

    2014-01-01

    Panel data (repeated measures of the same individuals) has become more and more popular in research as it has a number of unique advantages such as enabling researchers to answer questions about individual change and help deal (partially) with the issues linked to causality. But this type of data

  1. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    the attributes in the database into small, usually two-dimensional distributions. We describe several optimizations that can make selectivity estimation highly efficient, and we present a complete implementation inside PostgreSQL’s query optimizer. Experimental results indicate an order of magnitude better...

  2. Direct numerical simulations of temporally developing hydrocarbon shear flames at elevated pressure: effects of the equation of state and the unity Lewis number assumption

    Science.gov (United States)

    Korucu, Ayse; Miller, Richard

    2016-11-01

    Direct numerical simulations (DNS) of temporally developing shear flames are used to investigate both equation of state (EOS) and unity-Lewis (Le) number assumption effects in hydrocarbon flames at elevated pressure. A reduced Kerosene / Air mechanism including a semi-global soot formation/oxidation model is used to study soot formation/oxidation processes in a temporally developing hydrocarbon shear flame operating at both atmospheric and elevated pressures for the cubic Peng-Robinson real fluid EOS. Results are compared to simulations using the ideal gas law (IGL). The results show that while the unity-Le number assumption with the IGL EOS under-predicts the flame temperature for all pressures, with the real fluid EOS it under-predicts the flame temperature for 1 and 35 atm and over-predicts the rest. The soot mass fraction, Ys, is only under-predicted for the 1 atm flame for both the IGL and real gas fluid EOS models. While Ys is over-predicted for elevated pressures with the IGL EOS, for the real gas EOS the Ys predictions are similar to results using a non-unity Le model derived from non-equilibrium thermodynamics and real diffusivities. Adopting the unity Le assumption is shown to cause misprediction of Ys, the flame temperature, and the mass fractions of CO, H and OH.
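
    For readers unfamiliar with the two equations of state being compared, the sketch below evaluates the pressure given by the cubic Peng-Robinson EOS against the ideal gas value for a pure substance; the critical constants used for illustration (nitrogen) and the helper name are assumptions and have no connection to the reduced kerosene/air mechanism of the study.

      import numpy as np

      R = 8.314  # universal gas constant, J/(mol K)

      def peng_robinson_pressure(T, v_molar, Tc, Pc, omega):
          """Pressure from the cubic Peng-Robinson EOS for one mole at temperature T
          (K) and molar volume v_molar (m^3/mol), given critical temperature Tc,
          critical pressure Pc and acentric factor omega. A sketch for comparison
          with the ideal gas law, not the DNS implementation."""
          a = 0.45724 * R**2 * Tc**2 / Pc
          b = 0.07780 * R * Tc / Pc
          kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
          alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
          return R * T / (v_molar - b) - a * alpha / (v_molar**2 + 2.0 * b * v_molar - b**2)

      # Illustrative check: nitrogen at 300 K and the molar volume an ideal gas
      # would occupy at 35 atm; the real-fluid pressure deviates from 35 atm.
      Tc, Pc, omega = 126.2, 3.39e6, 0.037
      v = R * 300.0 / (35.0 * 101325.0)
      print(peng_robinson_pressure(300.0, v, Tc, Pc, omega) / 101325.0, "atm (PR)")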

  3. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
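
    Since the proposal is motivated by the James-Stein result, a minimal positive-part James-Stein shrinkage estimator is sketched below to make the connection concrete; the function name, the assumed known unit variance and the toy data are illustrative assumptions and do not reproduce the paper's model-averaging scheme.

      import numpy as np

      def james_stein_estimate(x, sigma2=1.0):
          """Positive-part James-Stein estimator: shrink p >= 3 independent normal
          observations (known variance sigma2) toward zero. Illustrative sketch."""
          x = np.asarray(x, dtype=float)
          p = x.size
          if p < 3:
              return x.copy()  # no shrinkage gain in fewer than three dimensions
          shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.dot(x, x)))
          return shrink * x

      # Toy usage: one noisy observation of each of five means.
      rng = np.random.default_rng(0)
      theta = np.array([0.5, -0.2, 1.0, 0.0, -0.7])
      print(james_stein_estimate(theta + rng.normal(size=theta.size)))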

  4. Reservoir management under geological uncertainty using fast model update

    NARCIS (Netherlands)

    Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.

    2015-01-01

    Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU

  5. A Duopoly Manufacturers’ Game Model Considering Green Technology Investment under a Cap-and-Trade System

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    2018-03-01

    Full Text Available This research studied the duopoly manufacturers' decision-making considering green technology investment under a cap-and-trade system. It was assumed there were two manufacturers producing products which were substitutable for one another. On the basis of this assumption, the optimal production capacity, price, and green technology investment of the duopoly manufacturers under a cap-and-trade system were obtained. The increase or decrease of the optimal production quantity of the duopoly manufacturers under a cap-and-trade system was decided by their green technology level. The increase of the optimal price as well as the increase or decrease of the maximum expected profits were decided by the initial carbon emission quota granted by the government. Our research indicates that the carbon emission per unit product is inversely proportional to the market share of an enterprise and becomes an important index to measure the core competitiveness of an enterprise.

  6. Validation of spectral gas radiation models under oxyfuel conditions

    Energy Technology Data Exchange (ETDEWEB)

    Becher, Johann Valentin

    2013-05-15

    Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4 - 5.4 µm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3%. The exponential-wide-band model showed a deviation of 6%. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2% and was therefore recommended as a reference model for the

  7. Mathematical Modeling of Wastewater Oxidation under Microgravity Conditions

    OpenAIRE

    Boyun Guo; Donald W. Holder; David S. Schechter

    2005-01-01

    The volatile removal assembly (VRA) is a module installed in the International Space Station for removing contaminants (volatile organics) in the wastewater produced by the crew. The VRA contains a slim packed-bed reactor to perform catalytic oxidation of the wastewater at elevated pressure and temperature under microgravity conditions. Optimal design of the reactor requires a thorough understanding of how the reactor performs under microgravity conditions. The objective of this study was to theo...

  8. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

    Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to be able to price vanilla options in stochastic volatility and correlation models, in this work, we study the extension of the Heston model by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model including stochastic correlations. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
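
    To illustrate the kind of extension described, the sketch below simulates one asset path under a Heston-type variance process while the asset-variance correlation itself follows a mean-reverting SDE; the discretization, parameter values and clipping of the correlation are illustrative assumptions, not the authors' pricing algorithms.

      import numpy as np

      def heston_stochastic_corr_path(s0=100.0, v0=0.04, rho0=-0.5, r=0.01,
                                      kappa=1.5, theta=0.04, xi=0.3,
                                      a=2.0, m=-0.5, b=0.2,
                                      T=1.0, n_steps=252, seed=0):
          """Euler-Maruyama sketch: Heston variance plus an Ornstein-Uhlenbeck
          correlation process kept inside (-1, 1). Illustrative parameters only."""
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          s, v, rho = s0, v0, rho0
          for _ in range(n_steps):
              z1, z2, z3 = rng.normal(size=3)
              w_s = z1
              w_v = rho * z1 + np.sqrt(1.0 - rho**2) * z2  # correlate the two shocks
              v_pos = max(v, 0.0)                          # full-truncation scheme
              s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * w_s)
              v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * w_v
              rho += a * (m - rho) * dt + b * np.sqrt(dt) * z3
              rho = float(np.clip(rho, -0.99, 0.99))
          return s

      print(heston_stochastic_corr_path())

    Averaging the discounted payoff over many such paths would give a Monte Carlo vanilla option price under the extended dynamics.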

  10. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    Science.gov (United States)

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    Science.gov (United States)

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  12. On some unwarranted tacit assumptions in cognitive neuroscience.

    Science.gov (United States)

    Mausfeld, Rainer

    2012-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input-output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings.

  13. Are waves of relational assumptions eroding traditional analysis?

    Science.gov (United States)

    Meredith-Owen, William

    2013-11-01

    The author designates as 'traditional' those elements of psychoanalytic presumption and practice that have, in the wake of Fordham's legacy, helped to inform analytical psychology and expand our capacity to integrate the shadow. It is argued that this element of the broad spectrum of Jungian practice is in danger of erosion by the underlying assumptions of the relational approach, which is fast becoming the new establishment. If the maps of the traditional landscape of symbolic reference (primal scene, Oedipus et al.) are disregarded, analysts are left with only their own self-appointed authority with which to orientate themselves. This self-centric epistemological basis of the relationalists leads to a revision of 'analytic attitude' that may be therapeutic but is not essentially analytic. This theme is linked to the perennial challenge of balancing differentiation and merger and traced back, through Chasseguet-Smirgel, to its roots in Genesis. An endeavour is made to illustrate this within the Journal convention of clinically based discussion through a commentary on Colman's (2013) avowedly relational treatment of the case material presented in his recent Journal paper 'Reflections on knowledge and experience' and through an assessment of Jessica Benjamin's (2004) relational critique of Ron Britton's (1989) transference embodied approach. © 2013, The Society of Analytical Psychology.

  14. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  15. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  16. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  17. Retail Price Model

    Science.gov (United States)

    The Retail Price Model is a tool to estimate the average retail electricity prices - under both competitive and regulated market structures - using power sector projections and assumptions from the Energy Information Administration.

  18. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    Science.gov (United States)

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. Simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  19. Response of a DSNP pressurizer model under accident conditions

    International Nuclear Information System (INIS)

    Saphier, D.; Kallfelz, J.; Belblidia, L.

    1986-01-01

    Recently a new pressurizer model was developed for the DSNP simulation language. The model was connected to a simulation of the Trojan pressurized water reactor (PWR) and tested by simulating a loss-of-off-site power (LOSP) anticipated transient without scram. The results compare well to a similar study performed using the RELAP code. The pressurizer model and its response to the LOSP accident are presented

  20. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    Science.gov (United States)

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  1. A volume flexible inventory model with trapezoidal demand under inflation

    Directory of Open Access Journals (Sweden)

    kapil mehrotra

    2014-02-01

    Full Text Available This article explores an economic production quantity (EPQ) model for deteriorating items with time-dependent demand following a trapezoidal pattern, taking volume flexibility into account. We have also considered the inflation and time value of money. The solution of the model aims at determining the optimal production run-time in order to maximize the profit. The model is also illustrated by means of a numerical experiment. Further, the effects of different parameters are analysed by performing sensitivity analyses on the optimal policy.

  2. Modeling delamination of FRP laminates under low velocity impact

    Science.gov (United States)

    Jiang, Z.; Wen, H. M.; Ren, S. L.

    2017-09-01

    Fiber reinforced plastic (FRP) laminates have been increasingly used in various engineering fields such as aeronautics, astronautics, transportation and naval architecture, and their impact response and failure are a major concern in the academic community. A new numerical model is suggested for fiber reinforced plastic composites. The model considers that FRP laminates are constituted by unidirectional laminated plates with adhesive layers. A modified adhesive layer damage model that considers strain rate effects is incorporated into the ABAQUS / EXPLICIT finite element program by the user-defined material subroutine VUMAT. It transpires that the delamination predicted by the present model is in good agreement with the experimental results for low velocity impact.

  3. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  4. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  5. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  6. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  7. Uncertainties in predicting rice yield by current crop models under a wide range of climatic conditions

    NARCIS (Netherlands)

    Li, T.; Hasegawa, T.; Yin, X.; Zhu, Y.; Boote, K.; Adam, M.; Bregaglio, S.; Buis, S.; Confalonieri, R.; Fumoto, T.; Gaydon, D.; Marcaida III, M.; Nakagawa, H.; Oriol, P.; Ruane, A.C.; Ruget, F.; Singh, B.; Singh, U.; Tang, L.; Yoshida, H.; Zhang, Z.; Bouman, B.

    2015-01-01

    Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We

  8. Modeling detour behavior of pedestrian dynamics under different conditions

    Science.gov (United States)

    Qu, Yunchao; Xiao, Yao; Wu, Jianjun; Tang, Tao; Gao, Ziyou

    2018-02-01

    Pedestrian simulation approaches have been widely used to reveal human behavior and evaluate the performance of crowd evacuation. Among existing pedestrian simulation models, the social force model is capable of predicting many collective phenomena. Detour behavior occurs in many cases and is a dominant factor in crowd evacuation efficiency. However, limited attention has been paid to analyzing and modeling the characteristics of detour behavior. In this paper, a modified social force model integrated with a Voronoi diagram is proposed to calculate the detour direction and preferred velocity. Besides, taking into account the locations and velocities of neighboring pedestrians, a Logit-based choice model is built to describe the detour direction choice. The proposed model is applied to analyze pedestrian dynamics in a corridor scenario with either unidirectional or bidirectional flow, and a real-world building scenario. Simulation results show that the modified social force model including detour behavior could reduce the frequency of collision and deadlock, increase the average speed of the crowd, and predict more realistic crowd dynamics with detour behavior. This model can also be potentially applied to understand pedestrian dynamics and design emergency management strategies for crowd evacuations.
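
    For orientation, a bare-bones social force update is sketched below (relaxation toward a desired velocity plus exponential pedestrian-pedestrian repulsion); the Voronoi-based detour direction and the Logit choice model of the paper are not reproduced, and all parameter values are illustrative assumptions.

      import numpy as np

      def social_force_step(pos, vel, goal, dt=0.05, v_desired=1.34, tau=0.5,
                            A=2.0, B=0.3):
          """One explicit step of a basic social force model for N pedestrians.
          pos, vel, goal are (N, 2) arrays. The driving force relaxes the velocity
          toward the goal direction; an exponential repulsion acts between every
          pair of pedestrians. Illustrative sketch only."""
          to_goal = goal - pos
          e = to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)
          force = (v_desired * e - vel) / tau              # driving (relaxation) term
          n = pos.shape[0]
          for i in range(n):                               # pairwise repulsion
              for j in range(n):
                  if i == j:
                      continue
                  d_vec = pos[i] - pos[j]
                  d = np.linalg.norm(d_vec)
                  if d > 1e-9:
                      force[i] += A * np.exp(-d / B) * d_vec / d
          vel_new = vel + force * dt
          return pos + vel_new * dt, vel_new

      # Two pedestrians walking toward each other in a corridor.
      pos = np.array([[0.0, 0.0], [5.0, 0.1]])
      vel = np.zeros_like(pos)
      goal = np.array([[5.0, 0.0], [0.0, 0.0]])
      pos, vel = social_force_step(pos, vel, goal)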

  9. A model for optimization of process integration investments under uncertainty

    International Nuclear Information System (INIS)

    Svensson, Elin; Stroemberg, Ann-Brith; Patriksson, Michael

    2011-01-01

    The long-term economic outcome of energy-related industrial investment projects is difficult to evaluate because of uncertain energy market conditions. In this article, a general, multistage, stochastic programming model for the optimization of investments in process integration and industrial energy technologies is proposed. The problem is formulated as a mixed-binary linear programming model where uncertainties are modelled using a scenario-based approach. The objective is to maximize the expected net present value of the investments, which enable heat savings and decreased energy imports or increased energy exports at an industrial plant. The proposed modelling approach enables long-term planning of industrial, energy-related investments through the simultaneous optimization of immediate and later decisions. The stochastic programming approach is also suitable for modelling possibly complex process integration constraints. The general model formulation presented here is a suitable basis for more specialized case studies dealing with optimization of investments in energy efficiency. -- Highlights: → Stochastic programming approach to long-term planning of process integration investments. → Extensive mathematical model formulation. → Multi-stage investment decisions and scenario-based modelling of uncertain energy prices. → Results illustrate how investments made now affect later investment and operation opportunities. → Approach for evaluation of robustness with respect to variations in probability distribution.
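
    As a toy illustration of scenario-based investment valuation (not the article's mixed-binary formulation or case data), the sketch below enumerates first-stage investment decisions and ranks them by expected net present value over three assumed energy-price scenarios; every name and number is a made-up assumption.

      from itertools import product

      # Illustrative two-option, three-scenario example; all figures are assumptions.
      invest_cost = {"heat_exchanger": 1.0e6, "heat_pump": 1.8e6}     # EUR
      annual_savings_gwh = {"heat_exchanger": 5.0, "heat_pump": 9.0}
      scenarios = [(0.3, 30.0), (0.5, 50.0), (0.2, 80.0)]             # (probability, EUR/MWh)
      years, discount = 10, 0.08
      annuity = sum(1.0 / (1.0 + discount) ** t for t in range(1, years + 1))

      best = None
      for choice in product([0, 1], repeat=len(invest_cost)):
          chosen = [name for name, c in zip(invest_cost, choice) if c]
          cost = sum(invest_cost[n] for n in chosen)
          saved_mwh = 1000.0 * sum(annual_savings_gwh[n] for n in chosen)
          expected_yearly_saving = sum(p * price * saved_mwh for p, price in scenarios)
          npv = -cost + annuity * expected_yearly_saving
          if best is None or npv > best[0]:
              best = (npv, chosen)

      print("best expected NPV: %.0f EUR with investments %s" % best)

    A genuine multistage model would additionally let later decisions depend on which scenario is revealed, which is what the recourse structure of the proposed formulation captures.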

  10. Parabolic Free Boundary Price Formation Models Under Market Size Fluctuations

    KAUST Repository

    Markowich, Peter A.; Teichmann, Josef; Wolfram, Marie Therese

    2016-01-01

    In this paper we propose an extension of the Lasry-Lions price formation model which includes fluctuations of the numbers of buyers and vendors. We analyze the model in the case of deterministic and stochastic market size fluctuations and present

  11. A model for cooling systems analysis under natural convection

    International Nuclear Information System (INIS)

    Santos, S.J. dos.

    1988-01-01

    The present work analyses thermosyphons and their non-dimensional numbers. The mathematical model considers constant pressure, single-phase incompressible flow. It simulates both open and closed thermosyphons, and deals with heat sources like PWR cores or electrical heaters and cold sinks like heat exchangers or reservoirs. A computer code named STRATS was developed based on this model. (author)

  12. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  13. Mathematical Modeling of Column-Base Connections under Monotonic Loading

    Directory of Open Access Journals (Sweden)

    Gholamreza Abdollahzadeh

    2014-12-01

    Full Text Available Considerable damage to steel structures occurred during the Hyogo-ken Nanbu Earthquake. Among them, many exposed-type column bases failed in several consistent patterns, such as brittle base plate fracture, excessive bolt elongation, unexpected early bolt failure, and inferior construction work. The lessons from these phenomena led to the need for improved understanding of column base behavior. Joint behavior must be modeled when analyzing semi-rigid frames, which is associated with a mathematical model of the moment–rotation curve. The most accurate model uses continuous nonlinear functions. This article presents three areas of steel joint research: (1) analysis methods of semi-rigid joints; (2) prediction methods for the mechanical behavior of joints; (3) mathematical representations of the moment–rotation curve. In the current study, a new exponential model to depict the moment–rotation relationship of the column-base connection is proposed. The proposed nonlinear model represents an approach to the prediction of M–θ curves, taking into account the possible failure modes and the deformation characteristics of the connection elements. The new model has three physical parameters, along with two curve-fitted factors. These physical parameters are generated from dimensional details of the connection, as well as the material properties. The M–θ curves obtained by the model are compared with published connection tests and 3D FEM research. The proposed mathematical model adequately characterizes M–θ behavior through the full range of loading/rotations. As a result, modeling of column base connections using the proposed mathematical model can give crucial information beforehand, and avoids the time-consuming workmanship and cost of experimental studies.
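
    To show what fitting an exponential moment-rotation curve can look like in practice, the sketch below fits a generic exponential M–θ form to hypothetical test points with scipy; the functional form, parameter names and data are assumptions for illustration and are not the three-parameter model proposed in the article.

      import numpy as np
      from scipy.optimize import curve_fit

      def m_theta(theta, m_u, k_i, c):
          """Generic exponential moment-rotation curve: initial stiffness k_i at the
          origin, moments approaching the ultimate value m_u, and a fitted factor c."""
          return m_u * (1.0 - np.exp(-k_i * theta / m_u)) * (1.0 + c * theta)

      # Hypothetical rotations (rad) and moments (kN*m) from a column-base test.
      theta_data = np.array([0.0, 0.002, 0.005, 0.010, 0.020, 0.030, 0.040])
      m_data = np.array([0.0, 18.0, 38.0, 60.0, 82.0, 92.0, 97.0])

      popt, _ = curve_fit(m_theta, theta_data, m_data, p0=[100.0, 10000.0, 1.0])
      print("fitted [M_u, K_i, c]:", popt)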

  14. Modelling ship operational reliability over a mission under regular inspections

    NARCIS (Netherlands)

    Christer, A.H.; Lee, S.K.

    1997-01-01

    A ship is required to operate for a fixed mission period. Should a critical item of equipment fail at sea, the ship is subject to a costly event with potentially high risk to ship and crew. Given warning of a pending defect, the ship can try to return to port under its own power and thus attempt to

  15. DRUM: a new framework for metabolic modeling under non-balanced growth. Application to the carbon metabolism of unicellular microalgae.

    Science.gov (United States)

    Baroukh, Caroline; Muñoz-Tamayo, Rafael; Steyer, Jean-Philippe; Bernard, Olivier

    2014-01-01

    Metabolic modeling is a powerful tool to understand, predict and optimize bioprocesses, particularly when they imply intracellular molecules of interest. Unfortunately, the use of metabolic models for time varying metabolic fluxes is hampered by the lack of experimental data required to define and calibrate the kinetic reaction rates of the metabolic pathways. For this reason, metabolic models are often used under the balanced growth hypothesis. However, for some processes such as the photoautotrophic metabolism of microalgae, the balanced-growth assumption appears to be unreasonable because of the synchronization of their circadian cycle on the daily light. Yet, understanding microalgae metabolism is necessary to optimize the production yield of bioprocesses based on this microorganism, as for example production of third-generation biofuels. In this paper, we propose DRUM, a new dynamic metabolic modeling framework that handles the non-balanced growth condition and hence accumulation of intracellular metabolites. The first stage of the approach consists in splitting the metabolic network into sub-networks describing reactions which are spatially close, and which are assumed to satisfy balanced growth condition. The left metabolites interconnecting the sub-networks behave dynamically. Then, thanks to Elementary Flux Mode analysis, each sub-network is reduced to macroscopic reactions, for which simple kinetics are assumed. Finally, an Ordinary Differential Equation system is obtained to describe substrate consumption, biomass production, products excretion and accumulation of some internal metabolites. DRUM was applied to the accumulation of lipids and carbohydrates of the microalgae Tisochrysis lutea under day/night cycles. The resulting model describes accurately experimental data obtained in day/night conditions. It efficiently predicts the accumulation and consumption of lipids and carbohydrates.
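
    The final DRUM stage yields an ordinary differential equation system; the toy sketch below integrates such a system for one substrate, biomass and one accumulating internal pool using a Monod-type macroscopic rate, purely as an illustration of the output structure (stoichiometry, kinetics and parameter values are assumptions, not those of the Tisochrysis lutea application).

      from scipy.integrate import solve_ivp

      def drum_like_rhs(t, y, mu_max=0.05, k_s=0.5, y_xs=0.4, q_c=0.02):
          """Toy ODE system in the spirit of a DRUM output: substrate S is consumed
          by one Monod-type macroscopic reaction, producing biomass X and letting an
          internal storage pool C accumulate. Illustrative kinetics only."""
          s, x, c = y
          uptake = mu_max * s / (k_s + s) * x   # macroscopic uptake rate
          return [-uptake, y_xs * uptake, q_c * uptake]

      sol = solve_ivp(drum_like_rhs, (0.0, 48.0), [10.0, 0.1, 0.0])
      print(sol.y[:, -1])  # final substrate, biomass and storage pool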

  16. DRUM: a new framework for metabolic modeling under non-balanced growth. Application to the carbon metabolism of unicellular microalgae.

    Directory of Open Access Journals (Sweden)

    Caroline Baroukh

    Full Text Available Metabolic modeling is a powerful tool to understand, predict and optimize bioprocesses, particularly when they imply intracellular molecules of interest. Unfortunately, the use of metabolic models for time varying metabolic fluxes is hampered by the lack of experimental data required to define and calibrate the kinetic reaction rates of the metabolic pathways. For this reason, metabolic models are often used under the balanced growth hypothesis. However, for some processes such as the photoautotrophic metabolism of microalgae, the balanced-growth assumption appears to be unreasonable because of the synchronization of their circadian cycle on the daily light. Yet, understanding microalgae metabolism is necessary to optimize the production yield of bioprocesses based on this microorganism, as for example production of third-generation biofuels. In this paper, we propose DRUM, a new dynamic metabolic modeling framework that handles the non-balanced growth condition and hence accumulation of intracellular metabolites. The first stage of the approach consists in splitting the metabolic network into sub-networks describing reactions which are spatially close, and which are assumed to satisfy balanced growth condition. The left metabolites interconnecting the sub-networks behave dynamically. Then, thanks to Elementary Flux Mode analysis, each sub-network is reduced to macroscopic reactions, for which simple kinetics are assumed. Finally, an Ordinary Differential Equation system is obtained to describe substrate consumption, biomass production, products excretion and accumulation of some internal metabolites. DRUM was applied to the accumulation of lipids and carbohydrates of the microalgae Tisochrysis lutea under day/night cycles. The resulting model describes accurately experimental data obtained in day/night conditions. It efficiently predicts the accumulation and consumption of lipids and carbohydrates.

  17. Nanostructure evolution under irradiation of Fe(C)MnNi model alloys for reactor pressure vessel steels

    Energy Technology Data Exchange (ETDEWEB)

    Chiapetto, M., E-mail: mchiapet@sckcen.be [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium); Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Becquart, C.S. [Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France); Domain, C. [EDF R& D, Département Matériaux et Mécanique des Composants, Les Renardières, F-77250 Moret sur Loing (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium)

    2015-06-01

    Radiation-induced embrittlement of bainitic steels is one of the most important lifetime limiting factors of existing nuclear light water reactor pressure vessels. The primary mechanism of embrittlement is the obstruction of dislocation motion produced by nanometric defect structures that develop in the bulk of the material due to irradiation. The development of models that describe, based on physical mechanisms, the nanostructural changes in these types of materials due to neutron irradiation is expected to help to better understand which features are mainly responsible for embrittlement. The chemical elements that are thought to most influence the response under irradiation of low-Cu RPV steels, especially at high fluence, are Ni and Mn, hence there is an interest in modelling the nanostructure evolution in irradiated FeMnNi alloys. As a first step in this direction, we developed sets of parameters for object kinetic Monte Carlo (OKMC) simulations that allow this to be done, under simplifying assumptions, using a “grey alloy” approach that extends the already existing OKMC model for neutron irradiated Fe–C binary alloys [1]. Our model proved to be able to describe the trend in the buildup of irradiation defect populations at the operational temperature of LWRs (∼300 °C), in terms of both density and size distribution of the defect cluster populations, in FeMnNi model alloys as compared to Fe–C. In particular, the reduction of the mobility of point-defect clusters as a consequence of the presence of solutes proves to be key to explaining the experimentally observed disappearance of detectable point-defect clusters with increasing solute content.

  18. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004 Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology, as, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, having implications for dark matter/energy with universe acceleration properties and implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates to density changes in these density fields, where the density field changes are related to the acceleration of matter within an object. These density changes in turn change how an object couples to the surrounding density fields, whereby thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model

  19. Modeling U.S. water resources under climate change

    Science.gov (United States)

    Blanc, Elodie; Strzepek, Kenneth; Schlosser, Adam; Jacoby, Henry; Gueneau, Arthur; Fant, Charles; Rausch, Sebastian; Reilly, John

    2014-04-01

    Water is at the center of a complex and dynamic system involving climatic, biological, hydrological, physical, and human interactions. We demonstrate a new modeling system that integrates climatic and hydrological determinants of water supply with economic and biological drivers of sectoral and regional water requirement while taking into account constraints of engineered water storage and transport systems. This modeling system is an extension of the Massachusetts Institute of Technology (MIT) Integrated Global System Model framework and is unique in its consistent treatment of factors affecting water resources and water requirements. Irrigation demand, for example, is driven by the same climatic conditions that drive evapotranspiration in natural systems and runoff, and future scenarios of water demand for power plant cooling are consistent with energy scenarios driving climate change. To illustrate the modeling system we select "wet" and "dry" patterns of precipitation for the United States from general circulation models used in the Climate Model Intercomparison Project (CMIP3). Results suggest that population and economic growth alone would increase water stress in the United States through mid-century. Climate change generally increases water stress with the largest increases in the Southwest. By identifying areas of potential stress in the absence of specific adaptation responses, the modeling system can help direct attention to water planning that might then limit use or add storage in potentially stressed regions, while illustrating how avoiding climate change through mitigation could change likely outcomes.

  20. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that economic depreciation rates of decline may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
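To make the constant-rate assumption concrete, here is a minimal sketch (not from the paper; the vintage prices and rates are hypothetical) of geometric depreciation and a log-linear estimate of the rate, with a crude stability check across age sub-ranges:

```python
import numpy as np

# Hypothetical vintage prices of used machinery (age in years, observed price).
ages = np.array([0, 2, 4, 6, 8, 10], dtype=float)
prices = np.array([100.0, 81.0, 65.0, 53.0, 43.0, 35.0])

# Constant-rate (geometric) depreciation assumes P(t) = P0 * (1 - d)^t,
# i.e. log P(t) is linear in t.  Estimate d by a log-linear least-squares fit.
slope, intercept = np.polyfit(ages, np.log(prices), 1)
d_hat = 1.0 - np.exp(slope)
print(f"estimated annual depreciation rate: {d_hat:.3f}")

# If the stability assumption holds, the same rate should fit sub-periods
# (e.g. young vs. old vintages) about equally well.
for label, mask in [("ages 0-4", ages <= 4), ("ages 6-10", ages >= 6)]:
    s, _ = np.polyfit(ages[mask], np.log(prices[mask]), 1)
    print(label, "implied rate:", round(1.0 - np.exp(s), 3))
```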

  1. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research-in particular for the evaluation of health care practice, programs, and policy-because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A particle model of rolling grain ripples under waves

    DEFF Research Database (Denmark)

    Andersen, Ken Haste

    2001-01-01

    A simple model for the formation of rolling grain ripples on a flat sand bed by the oscillatory flow generated by a surface wave is presented. An equation of motion is derived for the individual ripples, seen as "particles," on the otherwise flat bed. The model accounts for the initial appearance...... of the ripples, the subsequent coarsening of the ripples, and the final equilibrium state. The model is related to the physical parameters of the problem, and an analytical approximation for the equilibrium spacing of the ripples is developed. It is found that the spacing between the ripples scales...

  3. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  4. Psychopathology, fundamental assumptions and CD-4 T lymphocyte ...

    African Journals Online (AJOL)

    In addition, we explored whether psychopathology and negative fundamental assumptions in ... Method: Self-rating questionnaires to assess depressive symptoms, ... associated with all participants scoring in the positive range of the FA scale.

  5. The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.

    Science.gov (United States)

    Meindl, Peter; Johnson, Kate M; Graham, Jesse

    2016-04-01

    Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative-but not positive-trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.

  6. Uniform background assumption produces misleading lung EIT images.

    Science.gov (United States)

    Grychtol, Bartłomiej; Adler, Andy

    2013-06-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes.

  7. Uniform background assumption produces misleading lung EIT images

    International Nuclear Information System (INIS)

    Grychtol, Bartłomiej; Adler, Andy

    2013-01-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes. (paper)

  8. Assumptions of the primordial spectrum and cosmological parameter estimation

    International Nuclear Information System (INIS)

    Shafieloo, Arman; Souradeep, Tarun

    2011-01-01

    The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large structures depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit-parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)

  9. Fourth-order structural steganalysis and analysis of cover assumptions

    Science.gov (United States)

    Ker, Andrew D.

    2006-02-01

    We extend our previous work on structural steganalysis of LSB replacement in digital images, building detectors which analyse the effect of LSB operations on pixel groups as large as four. Some of the method previously applied to triplets of pixels carries over straightforwardly. However we discover new complexities in the specification of a cover image model, a key component of the detector. There are many reasonable symmetry assumptions which we can make about parity and structure in natural images, only some of which provide detection of steganography, and the challenge is to identify the symmetries a) completely, and b) concisely. We give a list of possible symmetries and then reduce them to a complete, non-redundant, and approximately independent set. Some experimental results suggest that all useful symmetries are thus described. A weighting is proposed and its approximate variance stabilisation verified empirically. Finally, we apply symmetries to create a novel quadruples detector for LSB replacement steganography. Experimental results show some improvement, in most cases, over other detectors. However the gain in performance is moderate compared with the increased complexity in the detection algorithm, and we suggest that, without new insight, further extension of structural steganalysis may provide diminishing returns.

  10. Calibration of CORSIM models under saturated traffic flow conditions.

    Science.gov (United States)

    2013-09-01

    This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to calibrate simultaneously all the calibration parameters as well as demand patterns for any network topology....

  11. A transient single particle model under FCI conditions

    Institute of Scientific and Technical Information of China (English)

    LI Xiao-Yan; SHANG Zhi; XU Ji-Jun

    2005-01-01

    The paper is focused on the coupling effect between film boiling heat transfer and evaporation drag around a hot-particle in cold liquid. Based on the continuity, momentum and energy equations of the vapor film, a transient two-dimensional single particle model has been established. This paper contains a detailed description of HPMC (High-temperature Particle Moving in Coolant) model for studying some aspects of the premixing stage of fuel-coolant interactions (FCIs). The transient process of high-temperature particles moving in coolant can be simulated. Comparisons between the experiment results and the calculations using HPMC model demonstrate that HPMC model achieves a good agreement in predicting the time-varying characteristic of high-temperature spheres moving in coolant.

  12. Sandpile models with and without an underlying spatial structure

    International Nuclear Information System (INIS)

    Christensen, K.; Olami, Z.

    1993-01-01

    We present a simple mean-field model for the sandpile model introduced by Bak, Tang, and Wiesenfeld (BTW) [Phys. Rev. Lett. 59, 381 (1987)]. In the mean-field model we are able to pinpoint the process of self-organization as well as the emerging scale invariance displayed as a power-law distribution of avalanche sizes. We discuss the BTW sandpile model on a lattice and show that the dynamical behavior can be expressed as a transport problem. This implies that the average avalanche size scales with the system size, and additional heuristic arguments related to the transport properties more than indicate the origin of the power-law behavior. We review recent work in which scaling relations and additional constraints between the various critical exponents are addressed. We demonstrate that some of the proposed relations are inconsistent. We present a coherent ''theory'' in which the scaling relations along with additional constraints leave only one exponent unknown
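For readers unfamiliar with the lattice model being referenced, the following sketch simulates the standard BTW sandpile on a small 2D lattice and records avalanche sizes; it illustrates the toppling rule only, not the authors' mean-field treatment, and the lattice size and number of grains are arbitrary:

```python
import random

L = 20                              # lattice side length (arbitrary)
z = [[0] * L for _ in range(L)]     # local heights
avalanche_sizes = []

def topple(grid):
    """Relax the grid; return the number of topplings (avalanche size)."""
    size = 0
    unstable = [(i, j) for i in range(L) for j in range(L) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:   # grains fall off open boundaries
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:
            unstable.append((i, j))
    return size

for _ in range(20000):              # drive slowly: one grain at a time
    i, j = random.randrange(L), random.randrange(L)
    z[i][j] += 1
    if z[i][j] >= 4:
        avalanche_sizes.append(topple(z))

big = sum(1 for s in avalanche_sizes if s > 100)
print(f"{len(avalanche_sizes)} avalanches, {big} of size > 100")
```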

  13. Model Justified Search Algorithms for Scheduling Under Uncertainty

    National Research Council Canada - National Science Library

    Howe, Adele; Whitley, L. D

    2008-01-01

    .... We also identified plateaus as a significant barrier to superb performance of local search on scheduling and have studied several canonical discrete optimization problems to discover and model the nature of plateaus...

  14. Modelling Dominance Hierarchies Under Winner and Loser Effects.

    Science.gov (United States)

    Kura, Klodeta; Broom, Mark; Kandler, Anne

    2015-06-01

    Animals that live in groups commonly form themselves into dominance hierarchies which are used to allocate important resources such as access to mating opportunities and food. In this paper, we develop a model of dominance hierarchy formation based upon the concept of winner and loser effects using a simulation-based model and consider the linearity of our hierarchy using existing and new statistical measures. Two models are analysed: when each individual in a group does not know the real ability of their opponents to win a fight and when they can estimate their opponents' ability every time they fight. This estimation may be accurate or fall within an error bound. For both models, we investigate if we can achieve hierarchy linearity, and if so, when it is established. We are particularly interested in the question of how many fights are necessary to establish a dominance hierarchy.
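A minimal agent-based sketch of winner and loser effects is given below; the group size, multiplicative effect sizes, and score-based win probability are assumptions chosen only for illustration, not the authors' exact simulation model:

```python
import random

N_IND, N_FIGHTS = 8, 2000
WIN_BOOST, LOSS_PENALTY = 1.1, 0.9     # multiplicative winner/loser effects (hypothetical)
scores = [1.0] * N_IND                 # internal "fighting ability" scores
wins = [[0] * N_IND for _ in range(N_IND)]

for _ in range(N_FIGHTS):
    a, b = random.sample(range(N_IND), 2)
    # probability a beats b depends only on current scores (opponents' real ability unknown)
    p_a = scores[a] / (scores[a] + scores[b])
    winner, loser = (a, b) if random.random() < p_a else (b, a)
    scores[winner] *= WIN_BOOST
    scores[loser] *= LOSS_PENALTY
    wins[winner][loser] += 1

# Crude linearity check: fraction of pairs with a clearly dominant individual.
clear = sum(1 for i in range(N_IND) for j in range(i + 1, N_IND)
            if wins[i][j] != wins[j][i])
pairs = N_IND * (N_IND - 1) // 2
print(f"pairs with a clear dominant: {clear}/{pairs}")
print("final scores:", [round(s, 2) for s in scores])
```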

  15. Finite element modeling of Balsa wood structures under severe loadings

    International Nuclear Information System (INIS)

    Toson, B.; Pesque, J.J.; Viot, P.

    2014-01-01

    In order to compute, in various situations, the requirements for transporting packages using Balsa wood as an energy absorber, a constitutive model is needed that takes into account all of the specific characteristics of the wood, such as its anisotropy, compressibility, softening, densification, and strain rate dependence. Such a model must also include the treatment of rupture of the wood when it is in traction. The complete description of wood behavior is not sufficient: robustness is also necessary because this model has to work in presence of large deformations and of many other external nonlinear phenomena in the surrounding structures. We propose such a constitutive model that we have developed using the commercial finite element package ABAQUS. The necessary data were acquired through an extensive compilation of the existing literature with the augmentation of personal measurements. Numerous validation tests are presented that represent different impact situations that a transportation cask might endure. (authors)

  16. Idaho National Engineering Laboratory installation roadmap assumptions document

    International Nuclear Information System (INIS)

    1993-05-01

    This document is a composite of roadmap assumptions developed for the Idaho National Engineering Laboratory (INEL) by the US Department of Energy Idaho Field Office and subcontractor personnel as a key element in the implementation of the Roadmap Methodology for the INEL Site. The development and identification of these assumptions is an important factor in planning basis development and establishes the planning baseline for all subsequent roadmap analysis at the INEL

  17. A Scalable Heuristic for Viral Marketing Under the Tipping Model

    Science.gov (United States)

    2013-09-01

    Flixster is a social media website that allows users to share reviews and other information about cinema. [35] It was extracted in Dec. 2010. – FourSquare...work of Reichman were developed independently. We also note that Reichman performs no experimental evaluation of the algorithm. A Scalable Heuristic...other dif- fusion models, such as the independent cascade model [21] and evolutionary graph theory [25] as well as probabilistic variants of the

  18. Formal modeling of a system of chemical reactions under uncertainty.

    Science.gov (United States)

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
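The paper's formalism and CTL model checking are not reproduced here; the sketch below only illustrates the interval idea on a single first-order reaction A -> B with an imprecise rate, comparing an interval approximation (bounds propagated with the extreme rates) against a midpoint run; all rate values are hypothetical:

```python
# First-order decay A -> B with an imprecise rate k in [K_LO, K_HI].
# Because the solution A(t) = A0 * exp(-k t) decreases monotonically in k,
# interval bounds on A are obtained by running the extreme rates.
K_LO, K_HI = 0.08, 0.12          # hypothetical rate bounds (1/s)
A0, DT, STEPS = 1.0, 0.1, 100    # initial concentration, time step, step count

def euler_decay(k, a0, dt, steps):
    a = a0
    for _ in range(steps):
        a += dt * (-k * a)
    return a

a_hi = euler_decay(K_LO, A0, DT, STEPS)                    # slowest decay -> upper bound
a_lo = euler_decay(K_HI, A0, DT, STEPS)                    # fastest decay -> lower bound
a_mid = euler_decay(0.5 * (K_LO + K_HI), A0, DT, STEPS)    # midpoint approximation

print(f"A(10 s) lies in [{a_lo:.3f}, {a_hi:.3f}]; midpoint run gives {a_mid:.3f}")
```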

  19. Parabolic Free Boundary Price Formation Models Under Market Size Fluctuations

    KAUST Repository

    Markowich, Peter A.

    2016-10-04

    In this paper we propose an extension of the Lasry-Lions price formation model which includes fluctuations of the numbers of buyers and vendors. We analyze the model in the case of deterministic and stochastic market size fluctuations and present results on the long time asymptotic behavior and numerical evidence and conjectures on periodic, almost periodic, and stochastic fluctuations. The numerical simulations extend the theoretical statements and give further insights into price formation dynamics.

  20. MATHEMATICAL MODELLING OF AIRCRAFT PILOTING PROCESS UNDER SPECIFIED FLIGHT PATH

    Directory of Open Access Journals (Sweden)

    И. Кузнецов

    2012-04-01

    Full Text Available The author suggests a mathematical model of the pilot's activity as a follow-up system, together with mathematical methods for describing that activity. The main idea of the model is forming the flight path and stabilizing the aircraft on it during instrument flight. The input of the follow-up system is the aircraft's deviation from the given path, observed visually by the pilot, and the output is the pilot's control actions to stabilize the aircraft on the flight path.

  1. Reflood modeling under oscillatory flow conditions with Cathare

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, J M; Bartak, J; Janicot, A

    1994-12-31

    The problems and the current status in oscillatory reflood modelling with the CATHARE code are presented. The physical models used in CATHARE for reflood modelling predicted the forced reflood experiments globally very well. Significant drawbacks existed in predicting experiments with oscillatory flow (both forced and gravity driven). First, the simpler case of forced flow oscillations was analyzed. Modelling improvements within the reflooding package resolved the problem of quench front blockages and unphysical oscillations. Good agreement with experiment is now obtained for the ERSEC forced-oscillation reflood tests. For gravity driven reflood, CATHARE predicted sustained flow oscillations during 100-150 s after the start of the reflood, whereas in the experiment flow oscillations were observed only during 25-30 s. Possible areas of modeling improvements are identified and several new correlations are suggested. The first test calculations of the BETHSY test 6.7A4 have shown that the oscillations are mostly sensitive to heat flux modeling downstream of the quench front. A much better agreement between CATHARE results and the experiment was obtained. However, further effort is necessary to obtain globally satisfactory predictions of gravity driven system reflood tests. (authors) 6 figs., 35 refs.

  2. Reflood modeling under oscillatory flow conditions with Cathare

    International Nuclear Information System (INIS)

    Kelly, J.M.; Bartak, J.; Janicot, A.

    1993-01-01

    The problems and the current status in oscillatory reflood modelling with the CATHARE code are presented. The physical models used in CATHARE for reflood modelling predicted the forced reflood experiments globally very well. Significant drawbacks existed in predicting experiments with oscillatory flow (both forced and gravity driven). First, the simpler case of forced flow oscillations was analyzed. Modelling improvements within the reflooding package resolved the problem of quench front blockages and unphysical oscillations. Good agreement with experiment is now obtained for the ERSEC forced-oscillation reflood tests. For gravity driven reflood, CATHARE predicted sustained flow oscillations during 100-150 s after the start of the reflood, whereas in the experiment flow oscillations were observed only during 25-30 s. Possible areas of modeling improvements are identified and several new correlations are suggested. The first test calculations of the BETHSY test 6.7A4 have shown that the oscillations are mostly sensitive to heat flux modeling downstream of the quench front. A much better agreement between CATHARE results and the experiment was obtained. However, further effort is necessary to obtain globally satisfactory predictions of gravity driven system reflood tests. (authors) 6 figs., 35 refs

  3. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    Science.gov (United States)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model that relates the reflectance to the tissue properties is needed. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  4. Mathematical modelling of circular cylinder deformation under inner growth

    Directory of Open Access Journals (Sweden)

    A. V. Siasiev

    2009-09-01

    Full Text Available The problem of the intensive deformed state (IDS) of a viscoelastic hollow cylinder that grows under the action of inner pressure is considered. Continuous growth takes place on the internal radius such that the radius and the pressure change according to a given law. The special case of a linear creep law is considered, and numerical results are presented as graphs of the time dependence of stresses and displacements for different points of the cylinder.

  5. Oil price assumptions in macroeconomic forecasts: should we follow future market expectations?

    International Nuclear Information System (INIS)

    Coimbra, C.; Esteves, P.S.

    2004-01-01

    In macroeconomic forecasting, in spite of its important role in price and activity developments, oil prices are usually taken as an exogenous variable, for which assumptions have to be made. This paper evaluates the forecasting performance of futures market prices against the other popular technical procedure, the carry-over assumption. The results suggest that there is almost no difference between opting for futures market prices or using the carry-over assumption for short-term forecasting horizons (up to 12 months), while, for longer-term horizons, they favour the use of futures market prices. However, as futures market prices reflect market expectations for world economic activity, futures oil prices should be adjusted whenever market expectations for world economic growth are different to the values underlying the macroeconomic scenarios, in order to fully ensure the internal consistency of those scenarios. (Author)
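A minimal sketch of the kind of comparison described above, using synthetic data rather than the authors' dataset: forecast errors of the carry-over assumption (price frozen at the last observed spot) versus a futures-based path, measured by RMSE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly data over a 12-month horizon (all values made up).
spot_now = 30.0
realized = spot_now + np.cumsum(rng.normal(0.3, 2.0, 12))               # what actually happened
futures = spot_now + 0.25 * np.arange(1, 13) + rng.normal(0, 1.5, 12)   # futures market curve
carry_over = np.full(12, spot_now)    # carry-over assumption: price stays at the last spot

def rmse(forecast, actual):
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

print("RMSE, carry-over assumption:", round(rmse(carry_over, realized), 2))
print("RMSE, futures market prices:", round(rmse(futures, realized), 2))
```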

  6. Calibration under uncertainty for finite element models of masonry monuments

    Energy Technology Data Exchange (ETDEWEB)

    Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin

    2010-02-01

    Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.

  7. Modeling of a bioethanol combustion engine under different operating conditions

    International Nuclear Information System (INIS)

    Hedfi, Hachem; Jedli, Hedi; Jbara, Abdessalem; Slimi, Khalifa

    2014-01-01

    Highlights: • Bioethanol/gasoline blends’ fuel effects on engine’s efficiency, CO and NOx emissions. • Fuel consumption and EGR optimizations with respect to estimated engine’s work. • Ignition timing and blends’ effects on engine’s efficiency. • Rich mixture, gasoline/bioethanol blends and EGR effects on engine’s efficiency. - Abstract: A physical model based on a thermodynamic analysis was designed to characterize the combustion reaction parameters. The time-variations of pressure and temperature required for the calculation of specific heat ratio are obtained from the solution of energy conservation equation. The chemical combustion of biofuel is modeled by an overall reaction in two-steps. The rich mixture and EGR were varied to obtain the optimum operating conditions for the engine. The NOx formation is modeled by using an eight-species six-step mechanism. The effect of various formation steps of NOx in combustion is considered via a phenomenological model of combustion speed. This simplified model, which has been validated by the most available published results, is used to characterize and control, in real time, the impact of biofuel on engine performances and NOx emissions as well. It has been demonstrated that a delay of the ignition timing leads to an increase of the gas mixture temperature and cylinder pressure. Furthermore, it has been found that the CO is lower near the stoichiometry. Nevertheless, we notice that lower rich mixture values result in small NOx emission rates

  8. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    Science.gov (United States)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  9. Thermodynamically consistent model of brittle oil shales under overpressure

    Science.gov (United States)

    Izvekov, Oleg

    2016-04-01

    The concept of dual porosity is a common approach to simulating oil shale production. Within this concept the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during pressure reduction in production. In this work a new thermodynamically consistent model which generalizes the dual-porosity model is proposed. Its particularities are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to the simulation of secondary fracture development in the low-permeability matrix. Slip along natural fractures is simulated in the frame of plasticity theory with the Drucker-Prager criterion.

  10. Behavioural modelling of irrigation decision making under water scarcity

    Science.gov (United States)

    Foster, T.; Brozovic, N.; Butler, A. P.

    2013-12-01

    Providing effective policy solutions to aquifer depletion caused by abstraction for irrigation is a key challenge for socio-hydrology. However, most crop production functions used in hydrological models do not capture the intraseasonal nature of irrigation planning, or the importance of well yield in land and water use decisions. Here we develop a method for determining stochastic intraseasonal water use that is based on observed farmer behaviour but is also theoretically consistent with dynamically optimal decision making. We use the model to (i) analyse the joint land and water use decision by farmers; (ii) to assess changes in behaviour and production risk in response to water scarcity; and (iii) to understand the limits of applicability of current methods in policy design. We develop a biophysical model of water-limited crop yield building on the AquaCrop model. The model is calibrated and applied to case studies of irrigated corn production in Nebraska and Texas. We run the model iteratively, using long-term climate records, to define two formulations of the crop-water production function: (i) the aggregate relationship between total seasonal irrigation and yield (typical of current approaches); and (ii) the stochastic response of yield and total seasonal irrigation to the choice of an intraseasonal soil moisture target and irrigated area. Irrigated area (the extensive margin decision) and per-area irrigation intensity (the intensive margin decision) are then calculated for different seasonal water restrictions (corresponding to regulatory policies) and well yield constraints on intraseasonal abstraction rates (corresponding to aquifer system limits). Profit- and utility-maximising decisions are determined assuming risk neutrality and varying degrees of risk aversion, respectively. Our results demonstrate that the formulation of the production function has a significant impact on the response to water scarcity. For low well yields, which are the major concern

  11. Global modelling of river water quality under climate change

    Science.gov (United States)

    van Vliet, Michelle T. H.; Franssen, Wietse H. P.; Yearsley, John R.

    2017-04-01

    Climate change will pose challenges on the quality of freshwater resources for human use and ecosystems for instance by changing the dilution capacity and by affecting the rate of chemical processes in rivers. Here we assess the impacts of climate change and induced streamflow changes on a selection of water quality parameters for river basins globally. We used the Variable Infiltration Capacity (VIC) model and a newly developed global water quality module for salinity, temperature, dissolved oxygen and biochemical oxygen demand. The modelling framework was validated using observed records of streamflow, water temperature, chloride, electrical conductivity, dissolved oxygen and biochemical oxygen demand for 1981-2010. VIC and the water quality module were then forced with an ensemble of bias-corrected General Circulation Model (GCM) output for the representative concentration pathways RCP2.6 and RCP8.5 to study water quality trends and identify critical regions (hotspots) of water quality deterioration for the 21st century.

  12. Modeling sintering of multilayers under influence of gravity

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund; Olevsky, Eugene; Tadesse Molla, Tesfaye

    2013-01-01

    , which describes the combined effect of sintering and gravity of thin multilayers, is derived and later compared with experimental results. It allows for consideration of both uniaxial and biaxial stress states. The model is based on the Skorohod-Olevsky viscous sintering framework, the classical...... laminate theory and the elastic-viscoelastic correspondence principle. The modeling approach is then applied to illustrate the effect of gravity during sintering of thin layers of cerium gadolinium oxide (CGO), and it is found to be significant. © 2012 The American Ceramic Society....

  13. On a Stochastic Failure Model under Random Shocks

    Science.gov (United States)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
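The following Monte Carlo sketch illustrates the class of models described: shocks arrive according to a nonhomogeneous Poisson process (simulated by thinning) and each shock triggers failure only after a random delay; the intensity function and delay distribution are hypothetical, and the survival function is estimated empirically rather than from the paper's closed-form results:

```python
import math
import random

T_MAX = 10.0
LAM_MAX = 2.0                       # upper bound on the intensity, used for thinning

def intensity(t):
    """Hypothetical nonhomogeneous Poisson intensity of shocks."""
    return 0.5 + 0.1 * t            # increases with time, stays below LAM_MAX

def shock_times():
    """Simulate NHPP shock arrival times on [0, T_MAX] by thinning."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(LAM_MAX)
        if t > T_MAX:
            return times
        if random.random() < intensity(t) / LAM_MAX:
            times.append(t)

def failure_time():
    """Each shock triggers failure after a random (exponential) delay."""
    candidates = [s + random.expovariate(1.0) for s in shock_times()]
    return min(candidates) if candidates else math.inf

# Monte Carlo estimate of the survival function S(t) = P(no failure by t).
runs = [failure_time() for _ in range(20000)]
for t in (2.0, 5.0, 8.0):
    s_t = sum(1 for ft in runs if ft > t) / len(runs)
    print(f"S({t}) ~ {s_t:.3f}")
```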

  14. The monster sporadic group and a theory underlying superstring models

    International Nuclear Information System (INIS)

    Chapline, G.

    1996-09-01

    The pattern of duality symmetries acting on the states of compactified superstring models reinforces an earlier suggestion that the Monster sporadic group is a hidden symmetry for superstring models. This in turn points to a supersymmetric theory of self-dual and anti-self-dual K3 manifolds joined by Dirac strings and evolving in a 13 dimensional spacetime as the fundamental theory. In addition to the usual graviton and dilaton this theory contains matter-like degrees of freedom resembling the massless states of the heterotic string, thus providing a completely geometric interpretation for ordinary matter. 25 refs

  15. Triatominae as a model of morphological plasticity under ecological pressure

    Directory of Open Access Journals (Sweden)

    Dujardin JP

    1999-01-01

    Full Text Available The use of biochemical and genetic characters to explore species or population relationships has been applied to taxonomic questions since the 60s. In responding to the central question of the evolutionary history of Triatominae, i.e. their monophyletic or polyphyletic origin, two important questions arise: (i) to what extent is the morphologically-based classification valid for assessing phylogenetic relationships? and (ii) what are the main mechanisms underlying speciation in Triatominae? Phenetic and genetic studies so far developed suggest that speciation in Triatominae may be a rapid process mainly driven by ecological factors.

  16. Network formation under heterogeneous costs: The multiple group model

    NARCIS (Netherlands)

    Kamphorst, J.J.A.; van der Laan, G.

    2007-01-01

    It is widely recognized that the shape of networks influences both individual and aggregate behavior. This raises the question which types of networks are likely to arise. In this paper we investigate a model of network formation, where players are divided into groups and the costs of a link between

  17. Modified bond model for shear in slabs under concentrated loads

    NARCIS (Netherlands)

    Lantsoght, E.O.L.; Van der Veen, C.; De Boer, A.

    2015-01-01

    Slabs subjected to concentrated loads close to supports, as occurring for truck loads on slab bridges, are less studied than beams in shear or slab-column connections in punching. To predict the shear capacity for this case, the Bond Model for concentric punching shear was studied initially.

  18. The Optimal Portfolio Selection Model under g-Expectation

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    complicated and sophisticated, the optimal solution turns out to be surprisingly simple: the payoff of a portfolio of two binary claims. I also give the economic meaning of my model and compare it with the one in the work of Jin and Zhou, 2008.

  19. Modeling sheet-flow sand transport under progressive surface waves

    NARCIS (Netherlands)

    Kranenburg, Wouter

    2013-01-01

    In the near-shore zone, energetic sea waves generate sheet-flow sand transport. In present day coastal models, wave-induced sheet-flow sand transport rates are usually predicted with semi-empirical transport formulas, based on extensive research on this phenomenon in oscillatory flow tunnels.

  20. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah; Fathallah, Habib; Alouini, Mohamed-Slim

    2016-01-01

    transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes that are lower than a 1-km diameter. Moreover, the results of our modeling and analysis can

  1. Modeling Growth and Yield of Schizolobium amazonicum under Different Spacings

    Directory of Open Access Journals (Sweden)

    Gilson Fernandes da Silva

    2013-01-01

    Full Text Available This study aimed to present an approach to model the growth and yield of the species Schizolobium amazonicum (Paricá) based on a study of different spacings located in Pará, Brazil. Whole-stand models were employed, and two modeling strategies (Strategies A and B) were tested. Moreover, the following three scenarios were evaluated to assess the accuracy of the model in estimating total and commercial volumes at five years of age: complete absence of data (S1); available information about the variables basal area, site index, dominant height, and number of trees at two years of age (S2); and this information available at five years of age (S3). The results indicated that the 3 × 2 spacing has a higher mortality rate than normal, and, in general, greater spacing corresponds to larger diameter and average height and smaller basal area and volume per hectare. In estimating the total and commercial volumes for the three scenarios tested, Strategy B seems to be the most appropriate method to estimate the growth and yield of Paricá plantations in the study region, particularly because Strategy A showed a significant bias in its estimates.

  2. Modeling pedestrian gap crossing index under mixed traffic condition.

    Science.gov (United States)

    Naser, Mohamed M; Zulkiple, Adnan; Al Bargi, Walid A; Khalifa, Nasradeen A; Daniel, Basil David

    2017-12-01

    There are a variety of challenges faced by pedestrians when they walk along and attempt to cross a road, as most recorded accidents occur during this time. Pedestrians of all types, of both sexes and numerous age groups, are always subjected to risk and are characterized as the most exposed road users. The increased demand for better traffic management strategies to reduce risks at intersections and to improve traffic management quality, traffic volume handling, and cycle times has further increased concerns over the past decade. This paper aims to develop a sustainable pedestrian gap crossing index model based on traffic flow density. It focuses on the gaps accepted by pedestrians and their decision to cross the street, where the logarithm of accepted gaps (Log-Gap) was used to optimize the result of a model for gap crossing behavior. Through a review of extant literature, 15 influential variables were extracted for further empirical analysis. Subsequently, data from observation at an uncontrolled mid-block in Jalan Ampang in Kuala Lumpur, Malaysia was gathered, and Multiple Linear Regression (MLR) and Binary Logit Model (BLM) techniques were employed to analyze the results. From the results, different pedestrian behavioral characteristics were considered for a minimum gap size model, out of which only four variables could explain pedestrian road crossing behavior while the remaining variables had an insignificant effect. Among the different variables, age, rolling gap, vehicle type, and crossing were the most influential. The study concludes that pedestrians' decision to cross the street depends on pedestrian age, rolling gap, vehicle type, and the size of the traffic gap before crossing. The inferences from these models will be useful for increasing pedestrian safety and for the performance evaluation of uncontrolled midblock road crossings in developing countries. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
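A minimal sketch of a binary logit (BLM) gap-acceptance fit on synthetic data; the variables, coefficient signs, and use of scikit-learn are illustrative assumptions, not the study's actual dataset or estimates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Hypothetical observations: gap size (s), pedestrian age (years), heavy-vehicle flag.
gap = rng.uniform(1.0, 12.0, n)
age = rng.uniform(15, 75, n)
heavy = rng.integers(0, 2, n)

# Synthetic "true" crossing decision: larger log-gaps accepted more often,
# older pedestrians and heavy vehicles reduce acceptance (signs assumed).
logit = -4.0 + 3.0 * np.log(gap) - 0.02 * age - 0.8 * heavy
accepted = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([np.log(gap), age, heavy])   # Log-Gap enters the model
model = LogisticRegression().fit(X, accepted)

print("coefficients (log-gap, age, heavy vehicle):", np.round(model.coef_[0], 2))
# Probability a 30-year-old accepts a 6 s gap in front of a light vehicle:
print("P(accept):", round(model.predict_proba([[np.log(6.0), 30.0, 0]])[0, 1], 2))
```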

  3. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    Science.gov (United States)

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
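The authors' Bayesian hierarchical procedure is not reproduced here; the sketch below is a much simpler stand-in under hypothetical numbers: precision-weighted pooling of historical active-comparator effects, a z-style constancy check against the current trial, and a margin set as a fraction of the comparator effect that is discounted when constancy looks doubtful:

```python
import math

# Hypothetical historical trials: active-comparator effect estimates (log scale)
# and their standard errors, plus the comparator effect seen in the current trial.
hist_effects = [0.42, 0.35, 0.50]
hist_ses = [0.10, 0.12, 0.15]
current_effect, current_se = 0.20, 0.11

# Precision-weighted pooled historical effect (fixed-effect pooling).
weights = [1.0 / se ** 2 for se in hist_ses]
pooled = sum(w * e for w, e in zip(weights, hist_effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Simple constancy check: how far is the current comparator effect from history?
z = (current_effect - pooled) / math.sqrt(pooled_se ** 2 + current_se ** 2)
print(f"pooled historical effect {pooled:.2f}, constancy z-statistic {z:.2f}")

# Margin: retain 50% of the comparator effect, but base it on the smaller of
# the historical and current estimates when constancy looks doubtful (|z| > 2).
retention = 0.5
basis = min(pooled, current_effect) if abs(z) > 2 else pooled
print("non-inferiority margin:", round(retention * basis, 3))
```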

  4. Revealing patterns of cultural transmission from frequency data: equilibrium and non-equilibrium assumptions

    Science.gov (United States)

    Crema, Enrico R.; Kandler, Anne; Shennan, Stephen

    2016-12-01

    A long tradition of cultural evolutionary studies has developed a rich repertoire of mathematical models of social learning. Early studies have laid the foundation of more recent endeavours to infer patterns of cultural transmission from observed frequencies of a variety of cultural data, from decorative motifs on potsherds to baby names and musical preferences. While this wide range of applications provides an opportunity for the development of generalisable analytical workflows, archaeological data present new questions and challenges that require further methodological and theoretical discussion. Here we examine the decorative motifs of Neolithic pottery from an archaeological assemblage in Western Germany, and argue that the widely used (and relatively undiscussed) assumption that observed frequencies are the result of a system in equilibrium conditions is unwarranted, and can lead to incorrect conclusions. We analyse our data with a simulation-based inferential framework that can overcome some of the intrinsic limitations in archaeological data, as well as handle both equilibrium conditions and instances where the mode of cultural transmission is time-variant. Results suggest that none of the models examined can produce the observed pattern under equilibrium conditions, and suggest, instead, temporal shifts in the patterns of cultural transmission.
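As a concrete example of the simulation side of such a framework, the sketch below runs a neutral (unbiased copying) transmission model with innovation and records variant frequencies both shortly after the start (a non-equilibrium snapshot) and much later; population size, innovation rate, and snapshot times are arbitrary choices, not the study's settings:

```python
import random
from collections import Counter

N, MU, GENERATIONS = 200, 0.01, 500   # population size, innovation rate (hypothetical)

def step(pop, next_variant):
    """One generation of unbiased copying with innovation."""
    new_pop = []
    for _ in range(len(pop)):
        if random.random() < MU:
            new_pop.append(next_variant)         # invent a brand-new variant
            next_variant += 1
        else:
            new_pop.append(random.choice(pop))   # copy a randomly chosen individual
    return new_pop, next_variant

pop, nxt = [0] * N, 1
for g in range(GENERATIONS):
    pop, nxt = step(pop, nxt)
    if g == 24:                                  # non-equilibrium snapshot, soon after the start
        early = Counter(pop).most_common(5)

late = Counter(pop).most_common(5)               # (near-)equilibrium snapshot
print("top variant counts after 25 generations:", [c for _, c in early])
print("top variant counts after 500 generations:", [c for _, c in late])
```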

  5. A constitutive model for concrete under dynamic loading

    International Nuclear Information System (INIS)

    Suaris, W.; Shah, S.P.

    1983-01-01

    A continuous damage theory for the quasistatic and dynamic behaviour of concrete is presented. The continuous damage theory is a rational choice for predicting the dynamic behaviour of concrete, as the strain-rate effects that have been observed for concrete can to a large extent be attributed to the rate sensitivity of the microcracking process. A vectorial representation is adopted for the damage to account for the planar nature of the microcracks in concrete. Damage is treated as an internal state variable influencing the free energy of the material, and the constitutive equations and the damage evolution equations are derived consistently using thermodynamic considerations. The developed constitutive model is then calibrated by using test results in flexure and compression over a range of strain-rates. The constitutive model is also shown to be capable of predicting certain other experimentally observed characteristics of the dynamic response of concrete. (orig./HP)

  6. A Novel Computer Virus Propagation Model under Security Classification

    Directory of Open Access Journals (Sweden)

    Qingyi Zhu

    2017-01-01

    Full Text Available In reality, some computers have specific security classifications. For the sake of safety and cost, the security level of computers will be upgraded as threats in networks increase. Here we assume that there exists a threshold value which determines when countermeasures should be taken to level up the security of a fraction of computers with a low security level. And in some specific realistic environments the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model considering the impact brought by security classification in a full interconnection network. By using the theory of dynamic stability, the existence of equilibria and the stability conditions are analysed and proved. The above optimal threshold value is also given analytically. Then, some numerical experiments are made to justify the model. Besides, some discussions and antivirus measures are given.
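The paper's exact equations are not reproduced here; the sketch below is a simple two-class SIS-style system under homogeneous mixing, with a threshold that moves a fraction of low-security susceptible machines to the high-security class, integrated by explicit Euler; all rates and the threshold are hypothetical:

```python
# Two security classes in a fully interconnected network: low (L) and high (H).
# S_*, I_* are susceptible/infected fractions; high-security nodes become
# infected at a reduced rate.  When total infection exceeds a threshold, a
# fraction of low-security susceptibles is upgraded.  All values are hypothetical.
BETA, GAMMA = 0.5, 0.2        # infection and cure rates
REDUCTION = 0.3               # infection-rate factor for high-security nodes
THRESHOLD, UPGRADE = 0.2, 0.05

S_L, I_L, S_H, I_H = 0.9, 0.1, 0.0, 0.0
dt = 0.01
for _ in range(int(200 / dt)):
    force = BETA * (I_L + I_H)            # homogeneous mixing (full interconnection)
    new_inf_L = force * S_L
    new_inf_H = REDUCTION * force * S_H
    dS_L = -new_inf_L + GAMMA * I_L
    dI_L = new_inf_L - GAMMA * I_L
    dS_H = -new_inf_H + GAMMA * I_H
    dI_H = new_inf_H - GAMMA * I_H
    if I_L + I_H > THRESHOLD:             # security-classification countermeasure
        dS_L -= UPGRADE * S_L
        dS_H += UPGRADE * S_L
    S_L += dt * dS_L
    I_L += dt * dI_L
    S_H += dt * dS_H
    I_H += dt * dI_H

print(f"endemic level: I_L={I_L:.3f}, I_H={I_H:.3f}, high-security share={S_H + I_H:.3f}")
```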

  7. Creep modeling of textured zircaloy under biaxial stressing

    International Nuclear Information System (INIS)

    Adams, B.L.; Murty, K.L.

    1984-01-01

    Anisotropic biaxial creep behavior of textured Zircaloy tubing was modeled using a crystal-plastic uniform strain-rate upper-bound and a uniform stress lower-bound approach. Power-law steady-state creep is considered to occur on each crystallite glide system by fixing the slip rate to be proportional to the resolved shear stress raised to a power. Prismatic, basal, and pyramidal slip modes were considered. The crystallographic texture is characterized using the orientation distribution function determined from a set of three pole figures. This method is contrasted with a Von Mises-Hill phenomenological model in comparison with experimental data obtained at 673 K. The resulting creep-dissipative loci show the importance of the basal slip mode on creep in heavily cold-worked cladding, whereas prismatic slip is more important for the recrystallized materials. (author)

  8. Modeling the Virtual Machine Launching Overhead under Fermicloud

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [Fermilab]; Wu, Hao [Fermilab]; Ren, Shangping [IIT, Chicago]; Timm, Steven [Fermilab]; Bernabeu, Gerard [Fermilab]; Noh, Seo-Young [KISTI, Daejeon]

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
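A minimal sketch of what such a reference model could look like, fitted here to synthetic records rather than FermiCloud's operational data: a least-squares fit of launch overhead against host CPU, memory, and I/O utilization, then used to rank candidate hosts:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Synthetic "operational" records: host utilization fractions at VM launch time.
cpu, mem, io = rng.random(n), rng.random(n), rng.random(n)
# Hypothetical ground truth: overhead (seconds) grows with utilization, plus noise.
overhead = 20 + 15 * cpu + 10 * mem + 25 * io + rng.normal(0, 3, n)

# Fit a linear reference model: overhead ~ b0 + b1*cpu + b2*mem + b3*io.
X = np.column_stack([np.ones(n), cpu, mem, io])
coef, *_ = np.linalg.lstsq(X, overhead, rcond=None)
print("fitted coefficients (intercept, cpu, mem, io):", np.round(coef, 2))

# A cloud-bursting scheduler could then prefer the candidate host with the
# smallest predicted overhead.
hosts = np.array([[1, 0.9, 0.5, 0.2],   # busy host
                  [1, 0.3, 0.4, 0.1]])  # lightly loaded host
pred = hosts @ coef
print("predicted overheads:", np.round(pred, 1), "-> pick host", int(np.argmin(pred)))
```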

  9. Maintenance cost models in deregulated power systems under opportunity costs

    International Nuclear Information System (INIS)

    Al-Arfaj, K.; Dahal, K.; Azaiez, M.N.

    2007-01-01

    In a centralized power system, the operator is responsible for scheduling maintenance. There are different types of maintenance, including corrective maintenance; predictive maintenance; preventive maintenance; and reliability-centred maintenance. The main cause of power failures is poor maintenance. As such, maintenance costs play a significant role in deregulated power systems. They include direct costs associated with material and labor costs as well as indirect costs associated with spare parts inventory, shipment, test equipment, indirect labor, opportunity costs and cost of failure. In maintenance scheduling and planning, the cost function is the only component of the objective function. This paper presented the results of a study in which different components of maintenance costs were modeled. The maintenance models were formulated as an optimization problem with single and multiple objectives and a set of constraints. The maintenance cost models could be used to schedule the maintenance activities of power generators more accurately and to identify the best maintenance strategies over a period of time as they consider failure and opportunity costs in a deregulated environment. 32 refs., 4 tabs., 4 figs
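To illustrate how an opportunity-cost term changes the picture, here is a small sketch with entirely hypothetical figures: the total cost of one outage is the direct and indirect costs plus the market value of the energy not sold while the unit is offline:

```python
def maintenance_cost(direct_material, direct_labor, indirect, outage_hours,
                     lost_mw, market_price_per_mwh):
    """Total cost of one maintenance outage, including the opportunity cost of
    energy not sold while the unit is offline (deregulated-market view)."""
    opportunity = outage_hours * lost_mw * market_price_per_mwh
    return direct_material + direct_labor + indirect + opportunity

# Hypothetical comparison of two maintenance windows for the same overhaul.
low_price_window = maintenance_cost(40_000, 25_000, 10_000,
                                    outage_hours=72, lost_mw=300,
                                    market_price_per_mwh=22.0)
peak_price_window = maintenance_cost(40_000, 25_000, 10_000,
                                     outage_hours=72, lost_mw=300,
                                     market_price_per_mwh=55.0)
print(f"low-price window:  ${low_price_window:,.0f}")
print(f"peak-price window: ${peak_price_window:,.0f}")
```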

  10. Stability of void lattices under irradiation: a kinetic model

    International Nuclear Information System (INIS)

    Benoist, P.; Martin, G.

    1975-01-01

    Voids are imbedded in a homogeneous medium where point defects are uniformly created and annihilated. As shown by a perturbation calculation, the proportion of the defects which are lost on the cavities goes through a maximum, when the voids are arranged on a translation lattice. If a void is displaced from its lattice site, its growth rate becomes anisotropic and is larger in the direction of the vacant site. The relative efficiency of BCC versus FCC void lattices for the capture of point defects is shown to depend on the relaxation length of the point defects in the surrounding medium. It is shown that the rate of energy dissipation in the crystal under irradiation is maximum when the voids are ordered on the appropriate lattice

  11. Stability of void lattices under irradiation: a kinetic model

    International Nuclear Information System (INIS)

    Benoist, P.; Martin, G.

    1975-01-01

    Voids are imbedded in a homogeneous medium where point defects are uniformly created and annihilated. As shown by a perturbation calculation, the proportion of the defects which are lost on the cavities goes through a maximum, when the voids are arranged on a translation lattice. If a void is displaced from its lattice site, its growth rate becomes anisotropic and is larger in the direction of the vacant site. The relative efficiency of BCC versus FCC void lattices for the capture of point defects is shown to depend on the relaxation length of the point defects in the surrounding medium. It is shown that the rate of energy dissipation in the crystal under irradiation is maximum when the voids are ordered on the appropriate lattice [fr

  12. Development of a kinetic model, including rate constant estimations, on iodine and caesium behaviour in the primary circuit of LWR's under accident conditions

    International Nuclear Information System (INIS)

    Alonso, A.; Buron, J.M.; Fernandez, S.

    1991-07-01

    In this report, a kinetic model has been developed with the aim of reproducing the chemical phenomena that take place in a flowing system containing steam, hydrogen, and iodine and caesium vapours. The work is divided into two parts. The first part consists of the estimation, through Activated Complex Theory, of the reaction rate constants for the chosen reactions, and the development of the kinetic model based on the concept of an ideal tubular chemical reactor. The second part deals with the application of the model to several cases, which were taken from the Phase B 'Scoping Calculations' of the Phebus-FP Project (sequence AB) and the SFD-ST and SFD1.1 experiments. The main conclusion obtained from this work is that the assumption of instantaneous equilibrium could be inaccurate for estimating the iodine and caesium species distribution under severe accident conditions

  13. Proposed optical test of Bell's inequalities not resting upon the fair sampling assumption

    International Nuclear Information System (INIS)

    Santos, Emilio

    2004-01-01

    Arguments are given against the fair sampling assumption, used to claim an empirical disproof of local realism. New tests are proposed, able to discriminate between quantum mechanics and a restricted, but appealing, family of local hidden-variables models. Such tests require detectors with efficiencies just above 20%

  14. Modeling amorphization of tetrahedral structures under local approaches

    International Nuclear Information System (INIS)

    Jesurum, C.E.; Pulim, V.; Berger, B.; Hobbs, L.W.

    1997-01-01

    Many crystalline ceramics can be topologically disordered (amorphized) by disordering radiation events involving high-energy collision cascades or (in some cases) successive single-atom displacements. The authors are interested in both the potential for disorder and the possible aperiodic structures adopted following the disordering event. The potential for disordering is related to connectivity, and among those structures of interest are tetrahedral networks (such as SiO2, SiC and Si3N4) comprising corner-shared tetrahedral units whose connectivities are easily evaluated. In order to study the response of these networks to radiation, the authors have chosen to model their assembly according to the (simple) local rules that each corner obeys in connecting to another tetrahedron; in this way they easily erect large computer models of any crystalline polymorphic form. Amorphous structures can be similarly grown by application of altered rules. They have adopted a simple model of irradiation in which all bonds in the neighborhood of a designated tetrahedron are destroyed, and they reform the bonds in this region according to a set of (possibly different) local rules appropriate to the environmental conditions. When a tetrahedron approaches the boundary of this neighborhood, it undergoes an optimization step in which a spring is inserted between two corners of compatible tetrahedra when they are within a certain distance of one another; component forces are then applied that act to minimize the distance between these corners and minimize the deviation from the rules. The resulting structure is then analyzed for the complete adjacency matrix, irreducible ring statistics, and bond angle distributions
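    As a small illustration of the post-processing the abstract mentions (adjacency matrix and ring statistics), the sketch below builds a toy corner-sharing network as a graph and reports, for each bond, the size of the shortest ring passing through it. The connectivity is invented for illustration, and the shortest-ring search is only a crude stand-in for a full irreducible-ring analysis.

```python
from collections import deque

# Hypothetical toy network: tetrahedra labelled 0..5; an edge means two
# tetrahedra share a corner. Chosen only to illustrate the analysis.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (4, 5), (5, 2)]
n = 6
adj = [[0] * n for _ in range(n)]
for a, b in edges:
    adj[a][b] = adj[b][a] = 1

def shortest_ring_through(a, b):
    """Size of the shortest ring containing edge (a, b): BFS from a to b
    with the direct edge removed, plus one for the edge itself."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if adj[u][v] and not {u, v} == {a, b} and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist[b] + 1 if b in dist else None   # None: edge lies on no ring

ring_sizes = {e: shortest_ring_through(*e) for e in edges}
print("adjacency matrix:", adj)
print("shortest ring through each edge:", ring_sizes)
```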

  15. Bistable dynamics underlying excitability of ion homeostasis in neuron models.

    Directory of Open Access Journals (Sweden)

    Niklas Hübel

    2014-05-01

    Full Text Available When neurons fire action potentials, dissipation of free energy is usually not directly considered, because the change in free energy is often negligible compared to the immense reservoir stored in neural transmembrane ion gradients, and the long-term energy requirements are met through chemical energy, i.e., metabolism. However, these gradients can temporarily nearly vanish in neurological diseases, such as migraine and stroke, and in traumatic brain injury from concussions to severe injuries. We study biophysical neuron models based on the Hodgkin-Huxley (HH) formalism extended to include time-dependent ion concentrations inside and outside the cell and metabolic energy-driven pumps. We reveal the basic mechanism of a state of free energy-starvation (FES) with bifurcation analyses showing that the ion dynamics is bistable over a large range of pump rates when there is no contact with an ion bath. This is interpreted as a threshold reduction of a new fundamental mechanism of ionic excitability that causes a long-lasting but transient FES, as observed in pathological states. In particular, we can conclude that coupling the extracellular ion concentrations to a large glial-vascular bath can act as an inhibitory mechanism crucial to ion homeostasis, while the Na⁺/K⁺ pumps alone are insufficient to recover from FES. Our results provide the missing link between the HH formalism and activator-inhibitor models that have been successfully used for modeling migraine phenotypes, and will therefore allow us to validate the hypothesis that migraine symptoms are explained by disturbed function in ion channel subunits, Na⁺/K⁺ pumps, and other proteins that regulate ion homeostasis.
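    The following one-variable caricature is explicitly not the authors' HH-based model (the functional forms and numbers are invented), but it illustrates the two qualitative statements in the abstract: without an ion bath the ion dynamics is bistable, so a large enough perturbation of extracellular K⁺ settles into a pathological high-K⁺ state, whereas adding a glial-vascular bath coupling removes that state and the system recovers.

```python
import math

# Minimal caricature of ionic excitability: extracellular K+ (mM) with a
# sigmoidal, depolarization-driven K+ release term, a pump-like linear
# clearance, and an optional glial/vascular bath coupling.
def dK(k_o, bath=0.0):
    release = 2.0 / (1.0 + math.exp(10.0 - k_o))   # positive feedback at high K+
    pump = 0.2 * (k_o - 4.0)                        # Na+/K+-pump-like clearance
    return release - pump + bath * (4.0 - k_o)      # bath pulls K+ back toward 4 mM

def settle(k0, bath=0.0, t_end=2000.0, dt=0.01):
    """Forward-Euler integration to t_end; returns the state the system settles into."""
    k = k0
    for _ in range(int(t_end / dt)):
        k += dt * dK(k, bath)
    return k

# Without a bath the dynamics is bistable: small and large perturbations of
# extracellular K+ relax to different stable states (about 4 mM vs about 14 mM).
print("no bath,   start  6 mM ->", round(settle(6.0), 2))
print("no bath,   start 12 mM ->", round(settle(12.0), 2))
# A glial/vascular bath coupling removes the high-K+ state: the system recovers.
print("with bath, start 12 mM ->", round(settle(12.0, bath=0.2), 2))
```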

  16. A thermal model for photovoltaic panels under varying atmospheric conditions

    International Nuclear Information System (INIS)

    Armstrong, S.; Hurley, W.G.

    2010-01-01

    The response of the photovoltaic (PV) panel temperature is dynamic with respect to changes in the incoming solar radiation. During periods of rapidly changing conditions, a steady-state model of the operating temperature cannot be justified, because the response time of the PV panel temperature becomes significant due to its large thermal mass. It is therefore of interest to determine the thermal response time of the PV panel. Previous attempts to determine the thermal response time have used indoor measurements, controlling the wind flow over the surface of the panel with fans or conducting the experiments in darkness to avoid radiative heat loss effects. In real operating conditions, the effective PV panel temperature is subject to randomly varying ambient temperature and fluctuating wind speeds and directions, parameters that are not replicated in controlled, indoor experiments. A new thermal model is proposed that incorporates atmospheric conditions as well as the effects of PV panel material composition and mounting structure. Experimental results are presented which verify the thermal behaviour of a photovoltaic panel in low to strong winds.
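    As a minimal sketch of why thermal mass matters in the way the abstract describes, the following lumped-capacitance model (not the paper's model; the heat capacity, absorptance and wind-dependent loss coefficient are assumed values) steps the irradiance and shows a temperature response governed by a time constant of several minutes.

```python
# Hypothetical lumped-capacitance sketch of a PV module's thermal response:
# C * dT/dt = alpha * G(t) - U(v) * (T - T_amb). Not the paper's model or data.
C = 11000.0                     # areal heat capacity, J/(m^2*K)  (assumed)
alpha = 0.9                     # absorbed fraction of irradiance (assumed)

def U(v_wind):
    """Combined loss coefficient, W/(m^2*K); simple linear wind dependence (assumed)."""
    return 15.0 + 4.0 * v_wind

def simulate(T_amb=20.0, v_wind=2.0, dt=1.0, t_end=3600.0):
    """Step the irradiance from 200 to 800 W/m^2 at t = 600 s; explicit Euler in time."""
    T = T_amb + alpha * 200.0 / U(v_wind)      # start at steady state for 200 W/m^2
    trace = []
    t = 0.0
    while t <= t_end:
        G = 200.0 if t < 600.0 else 800.0
        T += dt * (alpha * G - U(v_wind) * (T - T_amb)) / C
        trace.append((t, T))
        t += dt
    return trace

trace = simulate()
tau = C / U(2.0)                               # thermal time constant, s
print(f"time constant ~ {tau:.0f} s; temperature after 1 h = {trace[-1][1]:.1f} C")
```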

  17. Financial Transaction Tax: Determination of Economic Impact Under DSGE Model

    Directory of Open Access Journals (Sweden)

    Veronika Solilová

    2015-01-01

    Full Text Available The discussion about possible taxation of the financial sector started in the European Union as a result of the financial crisis, which spread to Europe from the United States in 2008, and of the subsequent massive financial interventions made by governments in favour of the financial sector. On 14 February 2013, after the rejection in 2011 of the draft directive introducing a common system of financial transaction tax, the European Commission introduced the financial transaction tax through enhanced cooperation. The aim of the paper is to research the economic impact of the financial transaction tax on the EU (EU27 or EU11) using the DSGE model, which was used for the determination of impacts. Based on our analysis, the DSGE model can be considered underestimated in the case of the impact on economic growth and overestimated in the case of revenue collection. In particular, the overall impact of the financial transaction tax, considering cascade effects for securities (tax rate 2.2%) and derivatives (tax rate 0.2%), ranges between −4.752 and 1.472 percentage points of GDP. Further, it is assumed that relocation effects of business/trade of 40% on average cause a decline in expected tax revenues of EUR 13bn. Thus, at a time of fragile economic growth across the EU and an increased risk of recession in Europe, the introduction of the FTT would be undesirable.
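    The revenue side of the argument can be illustrated with toy arithmetic. Only the 40% relocation share comes from the abstract; the taxable-base and rate figures below are purely illustrative assumptions, so the resulting numbers are not the paper's estimates.

```python
# Toy revenue arithmetic: how relocation of taxable trading shrinks expected
# FTT receipts. Base turnover and rates are assumed for illustration only.
taxable_base_bn_eur = {"securities": 20000.0, "derivatives": 150000.0}  # assumed turnover
statutory_rate = {"securities": 0.001, "derivatives": 0.0001}           # assumed rates

def expected_revenue(relocation_share):
    """Revenue in bn EUR after a fraction of the taxable base relocates abroad."""
    return sum(base * statutory_rate[name] * (1.0 - relocation_share)
               for name, base in taxable_base_bn_eur.items())

static = expected_revenue(0.0)
with_relocation = expected_revenue(0.4)
print(f"static estimate     : {static:.1f} bn EUR")
print(f"with 40% relocation : {with_relocation:.1f} bn EUR")
print(f"revenue lost        : {static - with_relocation:.1f} bn EUR")
```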

  18. Developing Physiologic Models for Emergency Medical Procedures Under Microgravity

    Science.gov (United States)

    Parker, Nigel; O'Quinn, Veronica

    2012-01-01

    Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions and a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the level of consciousness of the patient. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions at the femoral, carotid, brachial, and pedal pulse locations.

  19. Modeling of fracture of protective concrete structures under impact loads

    Energy Technology Data Exchange (ETDEWEB)

    Radchenko, P. A., E-mail: radchenko@live.ru; Batuev, S. P.; Radchenko, A. V.; Plevkov, V. S. [Tomsk State University of Architecture and Building, Tomsk, 634003 (Russian Federation)

    2015-10-27

    This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure consisting of layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using the original algorithm and software taking into account algorithms for building grids of complex geometric objects and parallel computations. Dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that allows taking into account anisotropy of elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure; cells start to destruct in an unloading wave originating after the compression wave arrival at free cell surfaces.

  20. Modeling of fracture of protective concrete structures under impact loads

    Science.gov (United States)

    Radchenko, P. A.; Batuev, S. P.; Radchenko, A. V.; Plevkov, V. S.

    2015-10-01

    This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure consisting of layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using the original algorithm and software taking into account algorithms for building grids of complex geometric objects and parallel computations. Dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that allows taking into account anisotropy of elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure; cells start to destruct in an unloading wave originating after the compression wave arrival at free cell surfaces.