Contextuality under weak assumptions
International Nuclear Information System (INIS)
Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D
2017-01-01
The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove
Lifelong learning: Foundational models, underlying assumptions and critiques
Regmi, Kapil Dev
2015-04-01
Lifelong learning has become a catchword in almost all countries because of its growing influence on education policies in the globalised world. In the Organisation for Economic Cooperation and Development (OECD) and the European Union (EU), the promotion of lifelong learning has been a strategy to speed up economic growth and become competitive. For UNESCO and the World Bank, lifelong learning has been a novel education model to improve educational policies and programmes in developing countries. In the existing body of literature on the topic, various models of lifelong learning are discussed. After reviewing a number of relevant seminal texts by proponents of a variety of schools, this paper argues that the vast number of approaches are actually built on two foundational models, which the author calls the "human capital model" and the "humanistic model". The former aims to increase productive capacity by encouraging competition, privatisation and human capital formation so as to enhance economic growth. The latter aims to strengthen democracy and social welfare by fostering citizenship education, building social capital and expanding capability.
Testing the habituation assumption underlying models of parasitoid foraging behavior
Abram, Paul K.; Cusumano, Antonino; Abram, Katrina; Colazza, Stefano; Peri, Ezio
2017-01-01
Background. Habituation, a form of non-associative learning, has several well-defined characteristics that apply to a wide range of physiological and behavioral responses in many organisms. In classic patch time allocation models, habituation is considered to be a major mechanistic component of
Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption
Directory of Open Access Journals (Sweden)
Zheping Yan
2014-01-01
Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environmental and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. To make the algorithm recursive, the propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
Tran, Van; McCall, Matthew N; McMurray, Helene R; Almudevar, Anthony
2013-01-01
Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles.
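As a rough illustration of the model class described above, a minimal synchronous threshold Boolean network (TBN) with a self-degradation convention can be sketched as follows. The update convention, weights, thresholds, and the toy three-gene network are illustrative assumptions, not taken from the paper:

```python
# Sketch of a synchronous threshold Boolean network (TBN) update,
# assuming a common convention: a gene turns on when its weighted
# input sum exceeds its threshold. With self-degradation, a gene
# whose net input sits exactly at threshold decays to 0 rather than
# holding its previous value. All numbers here are illustrative.
from itertools import product

def tbn_step(state, weights, thresholds, self_degrade=True):
    """One synchronous update of a TBN.

    state: tuple of 0/1 gene states
    weights[i][j]: regulatory weight of gene j on gene i
    thresholds[i]: activation threshold of gene i
    """
    new_state = []
    for i, row in enumerate(weights):
        total = sum(w * s for w, s in zip(row, state))
        if total > thresholds[i]:
            new_state.append(1)
        elif total < thresholds[i]:
            new_state.append(0)
        else:  # net input exactly at threshold
            new_state.append(0 if self_degrade else state[i])
    return tuple(new_state)

def steady_states(n, weights, thresholds):
    """Enumerate fixed points by brute force over all 2**n states."""
    return [s for s in product((0, 1), repeat=n)
            if tbn_step(s, weights, thresholds) == s]

# Toy 3-gene network: gene 0 activates gene 1, gene 1 activates
# gene 2, gene 2 represses gene 0.
W = [[0, 0, -1],
     [1, 0, 0],
     [0, 1, 0]]
T = [0, 0, 0]
print(steady_states(3, W, T))  # → [(0, 0, 0)]
```

With self-degradation and no purely positive cycle, the toy network collapses to the single all-off fixed point, which mirrors the qualitative finding reported for the yeast cell-cycle network.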
Kellen, David; Klauer, Karl Christoph
2014-01-01
A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on…
Rubin, David C.; Berntsen, Dorthe; Bohni, Malene Klindt
2008-01-01
In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., text rev.; American Psychiatric Association,…
Forecasting Renewable Energy Consumption under Zero Assumptions
Directory of Open Access Journals (Sweden)
Jie Ma
2018-02-01
Full Text Available Renewable energy, as an environmentally friendly and sustainable source of energy, is key to realizing the nationally determined contributions of the United States (US) to the December 2015 Paris agreement. Policymakers in the US rely on energy forecasts to draft and implement cost-minimizing, efficient and realistic renewable and sustainable energy policies, but the inaccuracies in past projections are considerably high. The inaccuracies and inconsistencies in forecasts are due to the numerous factors considered, massive assumptions and modeling flaws in the underlying models. Here, we propose and apply a machine learning forecasting algorithm devoid of massive independent variables and assumptions to model and forecast renewable energy consumption (REC) in the US. We employ the forecasting technique to make projections on REC from biomass (REC-BMs) and hydroelectric (HE-EC) sources for the 2009-2016 period. We find that, relative to reference-case projections in the Energy Information Administration's Annual Energy Outlook 2008, projections based on our proposed technique present an enormous improvement, up to ~138.26-fold on REC-BMs and ~24.67-fold on HE-EC; and that applying our technique saves the US ~2692.62 petajoules (PJ) on HE-EC and ~9695.09 PJ on REC-BMs over the 8-year forecast period. The achieved high accuracy is also replicable in other regions.
Dynamic Group Diffie-Hellman Key Exchange under standard assumptions
International Nuclear Information System (INIS)
Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David
2002-01-01
Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model
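The group-key algebra underlying such protocols can be illustrated with a toy, unauthenticated sketch: n parties end up sharing g^(x1·x2·…·xn) mod p. The tiny prime, generator, and naive message pattern below are illustrative assumptions only; the protocol studied in the paper adds authentication, dynamic membership, and corruption resistance on top of this core:

```python
# Toy sketch of the (unauthenticated) core of group Diffie-Hellman:
# every party i ends up computing g^(x1*x2*...*xn) mod p from a
# partial value missing only its own exponent. The small prime is
# for demonstration; real deployments use standardized large
# prime-order groups and authenticated messages.
import secrets

def group_dh_keys(p, g, n):
    """Return each party's derived key and the expected shared key."""
    xs = [secrets.randbelow(p - 2) + 1 for _ in range(n)]
    # Expected shared key: g raised to the product of all secrets.
    shared = g
    for x in xs:
        shared = pow(shared, x, p)
    # Partial value for party i: g raised to all secrets except x_i.
    partials = []
    for i in range(n):
        v = g
        for j, x in enumerate(xs):
            if j != i:
                v = pow(v, x, p)
        partials.append(v)
    # Each party finishes with its own secret exponent.
    keys = [pow(partials[i], xs[i], p) for i in range(n)]
    return keys, shared

p, g = 2087, 5  # toy parameters
keys, shared = group_dh_keys(p, g, 4)
assert all(k == shared for k in keys)
```

Because modular exponentiation composes ((g^a)^b = g^(ab) mod p) and exponent multiplication commutes, every party derives the same key regardless of the order in which the secrets were applied.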
Limiting assumptions in molecular modeling: electrostatics.
Marshall, Garland R
2013-02-01
Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and Coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.
Bank stress testing under different balance sheet assumptions
Busch, Ramona; Drescher, Christian; Memmel, Christoph
2017-01-01
Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as that of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
A framework for the organizational assumptions underlying safety culture
International Nuclear Information System (INIS)
Packer, Charles
2002-01-01
The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)
Models for waste life cycle assessment: Review of technical assumptions
DEFF Research Database (Denmark)
Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky
2010-01-01
waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later benefit from past modelling assumptions, knowledge and issues. Models developed in different countries furthermore rely...
Energy Technology Data Exchange (ETDEWEB)
Mendoza, V.M.; Villanueva, E.E.; Garduno, R.; Adem, J. [Centro de Ciencias de la Atmosfera, Mexico (Mexico)
1995-12-31
General circulation models (GCMs) and energy balance models (EBMs) are the best way to simulate the complex large-scale dynamic and thermodynamic processes in the atmosphere. These models have been used to estimate the global warming due to an increase of atmospheric CO2. In Japan, Ohta and coworkers developed a physical model based on the conservation of thermal energy applied to ponded shallow water, to compute the change in water temperature, using the atmospheric warming and precipitation due to the increase in atmospheric CO2 computed by the GISS-GCM. In this work, a method similar to Ohta's is used for computing the change in ground temperature, soil moisture, evaporation, runoff and dryness index in eleven hydrological zones, using in this case the surface air temperature and precipitation due to CO2 doubling, computed by the GFDL R30 GCM and the version of the Adem thermodynamic climate model (CTM-EBM) which contains the three feedbacks (cryosphere, clouds and water vapor) and does not include water vapor in the CO2 atmospheric spectral band (12-19 μm).
Uncertainties in sandy shorelines evolution under the Bruun rule assumption
Directory of Open Access Journals (Sweden)
Gonéri eLe Cozannet
2016-04-01
Full Text Available In the current practice of sandy shoreline change assessments, the local sedimentary budget is evaluated using the sediment balance equation, that is, by summing the contributions of longshore and cross-shore processes. The contribution of future sea-level rise induced by climate change is usually obtained using the Bruun rule, which assumes that the shoreline retreat is equal to the change of sea level divided by the slope of the upper shoreface. However, it remains unclear whether this approach is appropriate to account for the impacts of future sea-level rise, owing to the lack of relevant observations to validate the Bruun rule under the expected sea-level rise rates. To address this issue, this article estimates the coastal settings and period of time under which the use of the Bruun rule could be (in)validated, in the case of wave-exposed, gently sloping sandy beaches. Using the sedimentary budgets of Stive (2004) and probabilistic sea-level rise scenarios based on IPCC, we provide shoreline change projections that account for all uncertain hydrosedimentary processes affecting idealized coasts (impacts of sea-level rise, storms and other cross-shore and longshore processes). We evaluate the relative importance of each source of uncertainty in the sediment balance equation using a global sensitivity analysis. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defences, the model predicts a perceivable shift toward generalized beach erosion by the middle of the 21st century. In contrast, the model predictions are unlikely to differ from the current situation in the case of scenario RCP 2.6. Finally, the contribution of sea-level rise and climate change scenarios to the uncertainty of sandy shoreline change projections increases with time during the 21st century. Our results have three primary implications for coastal settings similar to those described in Stive (2004): first, the validation of the Bruun rule will not necessarily be
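The Bruun rule as stated in the abstract reduces to a single division. The slope and rise values in this sketch are illustrative, not figures from Stive (2004):

```python
# Minimal illustration of the Bruun rule: predicted shoreline
# retreat equals the sea-level change divided by the slope of the
# upper shoreface. Input values below are illustrative only.

def bruun_retreat(sea_level_rise_m, upper_shoreface_slope):
    """Predicted horizontal retreat (m) for a given rise (m)."""
    return sea_level_rise_m / upper_shoreface_slope

# A gently sloping beach (slope 0.01, i.e. 1:100) under 0.5 m of rise:
print(bruun_retreat(0.5, 0.01))  # → 50.0 (metres of retreat)
```

The division makes the rule's sensitivity obvious: on gently sloping beaches a small rise is amplified into a retreat two orders of magnitude larger, which is why validating the rule against observations matters.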
Weak convergence of Jacobian determinants under asymmetric assumptions
Directory of Open Access Journals (Sweden)
Teresa Alberico
2012-05-01
Full Text Available Let $\Omega$ be a bounded open set in $\mathbb{R}^2$ with sufficiently smooth boundary, and let $f_k=(u_k,v_k)$ and $f=(u,v)$ be mappings in the Sobolev space $W^{1,2}(\Omega,\mathbb{R}^2)$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\mu$ in the sense of measures, and if one allows different assumptions on the two components of $f_k$ and $f$, e.g. $$u_k \rightharpoonup u \ \text{weakly in}\ W^{1,2}(\Omega), \qquad v_k \rightharpoonup v \ \text{weakly in}\ W^{1,q}(\Omega)$$ for some $q\in(1,2)$, then $$d\mu=J_f\,dz.$$ Moreover, we show that this result is optimal in the sense that the conclusion fails for $q=1$. On the other hand, we prove that $d\mu=J_f\,dz$ remains valid also in the case $q=1$, provided $u_k$ converges weakly to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\Omega)$, precisely $$u_k \rightharpoonup u \ \text{weakly in}\ W^{1,L^2\log^\alpha L}(\Omega)$$ for some $\alpha>1$.
Assumptions behind size-based ecosystem models are realistic
DEFF Research Database (Denmark)
Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.
2016-01-01
A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Froese et al. ... that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute-value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
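The attribute-value independence assumption criticized in the abstract can be shown on a toy relation; the table and predicates below are invented for illustration:

```python
# Demonstration of the attribute-value independence (AVI) assumption:
# an optimizer estimates a conjunctive selectivity as the product of
# per-attribute selectivities, which misses correlations between
# attributes. Data are made up for illustration.

rows = [
    {"make": "Honda", "model": "Civic"},
    {"make": "Honda", "model": "Civic"},
    {"make": "Honda", "model": "Accord"},
    {"make": "Toyota", "model": "Corolla"},
]

def sel(pred):
    """Fraction of rows satisfying a predicate."""
    return sum(1 for r in rows if pred(r)) / len(rows)

# True selectivity of the conjunctive predicate:
true_sel = sel(lambda r: r["make"] == "Honda" and r["model"] == "Civic")
# AVI estimate: multiply the marginal selectivities.
avi_sel = sel(lambda r: r["make"] == "Honda") * sel(lambda r: r["model"] == "Civic")

print(true_sel, avi_sel)  # → 0.5 0.375
```

Because "model" is strongly correlated with "make", the product underestimates the true selectivity; graphical models recover such correlations by factoring the joint distribution instead of assuming full independence.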
Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation
Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels
2016-06-01
An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.
Oil production, oil prices, and macroeconomic adjustment under different wage assumptions
International Nuclear Information System (INIS)
Harvie, C.; Maleka, P.T.
1992-01-01
In a previous paper one of the authors developed a simple model to try to identify the possible macroeconomic adjustment processes arising in an economy experiencing a temporary period of oil production, under alternative wage adjustment assumptions, namely nominal and real wage rigidity. Certain assumptions were made regarding the characteristics of actual production, the permanent revenues generated from that oil production, and the net exports/imports of oil. The role of the price of oil, and possible changes in that price was essentially ignored. Here we attempt to incorporate the price of oil, as well as changes in that price, in conjunction with the production of oil, the objective being to identify the contribution which the price of oil, and changes in it, make to the adjustment process itself. The emphasis in this paper is not given to a mathematical derivation and analysis of the model's dynamics of adjustment or its comparative statics, but rather to the derivation of simulation results from the model, for a specific assumed case, using a numerical algorithm program, conducive to the type of theoretical framework utilized here. The results presented suggest that although the adjustment profiles of the macroeconomic variables of interest, for either wage adjustment assumption, remain fundamentally the same, the magnitude of these adjustments is increased. Hence to derive a more accurate picture of the dimensions of adjustment of these macroeconomic variables, it is essential to include the price of oil as well as changes in that price. (Author)
DEFF Research Database (Denmark)
Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet
2016-01-01
In this full-day workshop we want to discuss how the IDC community can make more explicit the underlying assumptions, values and views regarding children and childhood that shape design decisions. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting on them? ... The workshop intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design.
Cost and Performance Assumptions for Modeling Electricity Generation Technologies
Energy Technology Data Exchange (ETDEWEB)
Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.
2010-11-01
The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
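The levelized cost of energy evaluated in the abstract divides discounted lifetime costs by discounted lifetime generation. The plant figures in this sketch are illustrative placeholders, not values from the six data sets compared:

```python
# Sketch of a simple levelized cost of energy (LCOE) calculation:
# discounted lifetime costs divided by discounted lifetime output.
# Constant yearly O&M and generation are simplifying assumptions;
# the inputs below are invented placeholders.

def lcoe(capital_cost, annual_om, annual_mwh, lifetime_yr, discount_rate):
    """LCOE in $/MWh for a plant with constant yearly O&M and output."""
    costs = capital_cost          # capital spent at year 0
    energy = 0.0
    for t in range(1, lifetime_yr + 1):
        df = (1 + discount_rate) ** -t   # discount factor for year t
        costs += annual_om * df
        energy += annual_mwh * df
    return costs / energy

# Hypothetical 100 MW plant: $2000/kW installed, $4M/yr O&M,
# 40% capacity factor, 30-year life, 7% discount rate.
capital = 2000 * 100_000          # $/kW times kW
mwh = 100 * 8760 * 0.40           # MW * hours/year * capacity factor
print(round(lcoe(capital, 4_000_000, mwh, 30, 0.07), 2))
```

Discounting the energy term as well as the costs is the standard convention that lets LCOE be read as the constant price per MWh that recovers all costs over the plant's life.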
PKreport: report generation for checking population pharmacokinetic model assumptions
Directory of Open Access Journals (Sweden)
Li Jun
2011-05-01
Full Text Available Abstract Background: Graphics play an important and unique role in population pharmacokinetic (PopPK) model building: exploring hidden structure among data before modeling, evaluating model fit, and validating results after modeling. Results: The work described in this paper concerns a new R package called PKreport, which is able to generate a collection of plots and statistics for testing model assumptions, visualizing data and diagnosing models. The metric system is utilized as the currency for communicating between data sets and the package to generate special-purpose plots. It provides ways to match output from diverse software such as NONMEM, Monolix, the R nlme package, etc. The package is implemented with an S4 class hierarchy and offers an efficient way to access the output from NONMEM 7. The final reports take advantage of the web browser as a user interface to manage and visualize plots. Conclusions: PKreport provides (1) a flexible and efficient R class to store and retrieve NONMEM 7 output; (2) automated plots for users to visualize data and models; (3) automatically generated R scripts that are used to create the plots; (4) an archive-oriented management tool for users to store, retrieve and modify figures; (5) high-quality graphs based on the R packages lattice and ggplot2. The general architecture, running environment and statistical methods can be readily extended with the R class hierarchy. PKreport is free to download at http://cran.r-project.org/web/packages/PKreport/index.html.
Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G
2014-11-01
Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
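The core of a sediment-fingerprinting mixing model can be sketched as a simple frequentist fit: find source proportions whose mixed tracer signature best matches the observed suspended sediment. The three-source, two-tracer numbers below are invented, and the paper's Bayesian variants additionally model priors, covariance, and error structure:

```python
# Frequentist sketch of sediment source apportionment: search for
# source proportions (summing to 1) that minimize the squared misfit
# between the predicted and observed tracer signatures. Tracer values
# are invented for illustration.
from itertools import product

# Mean concentrations of two hypothetical geochemical tracers.
sources = {
    "arable topsoil": (10.0, 2.0),
    "road verge": (4.0, 8.0),
    "subsurface": (1.0, 1.0),
}
mixture = (3.4, 2.8)  # observed suspended-sediment signature

def misfit(props):
    """Sum of squared differences between predicted and observed tracers."""
    pred = [sum(p * s[k] for p, s in zip(props, sources.values()))
            for k in range(len(mixture))]
    return sum((a - b) ** 2 for a, b in zip(pred, mixture))

# Exhaustive search over proportions on a 1% grid (p1 + p2 + p3 = 1).
best = min(
    ((a / 100, b / 100, (100 - a - b) / 100)
     for a, b in product(range(101), repeat=2) if a + b <= 100),
    key=misfit,
)
print(dict(zip(sources, best)))
# → {'arable topsoil': 0.19, 'road verge': 0.23, 'subsurface': 0.58}
```

Even this toy fit attributes the largest share to the subsurface source, echoing the qualitative result in the abstract; the study's point is that the exact proportions shift appreciably once priors and error assumptions enter the model.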
Hankins, Matthew
2008-10-14
The twelve-item General Health Questionnaire (GHQ-12) was developed to screen for non-specific psychiatric morbidity. It has been widely validated and found to be reliable. These validation studies have assumed that the GHQ-12 is one-dimensional and free of response bias, but recent evidence suggests that neither of these assumptions may be correct, threatening its utility as a screening instrument. Further uncertainty arises from the multiplicity of scoring methods for the GHQ-12. This study set out to establish the best-fitting model for the GHQ-12 under three scoring methods (Likert, GHQ and C-GHQ) and to calculate the degree of measurement error under these more realistic assumptions. GHQ-12 data were obtained from the Health Survey for England 2004 cohort (n = 3705). Structural equation modelling was used to assess the fit of (1) the one-dimensional model, (2) the current 'best fit' three-dimensional model and (3) a one-dimensional model with response bias. Three different scoring methods were assessed for each model. The best-fitting model was assessed for reliability, standard error of measurement and discrimination. The best-fitting model was one-dimensional with response bias on the negatively phrased items, suggesting that previously reported GHQ-12 factor structures were artifacts of the analysis method. The reliability of this model was over-estimated by Cronbach's alpha for all scoring methods: 0.90 (Likert method), 0.90 (GHQ method) and 0.75 (C-GHQ method). More realistic estimates of reliability were 0.73, 0.87 and 0.53, respectively. Discrimination (Delta) also varied according to scoring method: 0.94 (Likert method), 0.63 (GHQ method) and 0.97 (C-GHQ method). Conventional psychometric assessments using factor analysis and reliability estimates have obscured substantial measurement error in the GHQ-12 due to response bias on the negative items, which limits its utility as a screening instrument for psychiatric morbidity.
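The three scoring methods compared in the abstract differ only in how each 0-3 item response is collapsed. The sketch below follows the commonly described conventions (Likert 0-1-2-3; GHQ 0-0-1-1; C-GHQ 0-0-1-1 for positively phrased items and 0-1-1-1 for negatively phrased ones); which item positions count as negatively phrased is stated here as an assumption:

```python
# Sketch of the three GHQ-12 scoring methods. Item responses are
# coded 0..3 (best to worst). The set of "negatively phrased" item
# positions below is an assumed layout for illustration.

NEGATIVE_ITEMS = {1, 4, 7, 8, 10, 11}  # assumed 0-based positions

def score(responses, method):
    """Score 12 responses (each 0..3) under 'likert', 'ghq', or 'cghq'."""
    total = 0
    for i, r in enumerate(responses):
        if method == "likert":      # 0-1-2-3
            total += r
        elif method == "ghq":       # 0-0-1-1
            total += 1 if r >= 2 else 0
        elif method == "cghq":      # 0-0-1-1 positive, 0-1-1-1 negative
            if i in NEGATIVE_ITEMS:
                total += 1 if r >= 1 else 0
            else:
                total += 1 if r >= 2 else 0
        else:
            raise ValueError(method)
    return total

responses = [1, 2, 0, 1, 3, 0, 1, 2, 0, 1, 2, 1]
print(score(responses, "likert"),
      score(responses, "ghq"),
      score(responses, "cghq"))  # → 14 4 5
```

Because the three collapses weight the negatively phrased items differently, the same response pattern yields different totals, which is why the study's reliability and discrimination estimates vary by scoring method.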
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.
Directory of Open Access Journals (Sweden)
Giordano James
2010-01-01
Full Text Available Abstract A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176
Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion
Stains, Marilyne; Sevian, Hannah
2015-12-01
Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data ( N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set ( N = 308), and then, mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.
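Cluster analysis of this kind can be sketched with a small k-means on synthetic "response profile" vectors; the SAMM coding scheme and the study's data are not reproduced here, so every dimension and value below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical mental-model groups, coded on three survey dimensions
# (structure, origin of motion, trajectories); values are invented.
profiles = np.vstack([
    rng.normal([0.0, 0.0, 0.0], 0.25, size=(100, 3)),
    rng.normal([1.0, 1.0, 0.0], 0.25, size=(100, 3)),
    rng.normal([0.0, 1.0, 1.0], 0.25, size=(100, 3)),
])

def kmeans(x, k, iters=50, restarts=20, seed=0):
    """Plain Lloyd's algorithm with random restarts; keep the lowest-SSE run."""
    r = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        centers = x[r.choice(len(x), k, replace=False)]
        for _ in range(iters):
            d = ((x[:, None, :] - centers[None]) ** 2).sum(-1)
            labels = d.argmin(1)
            centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        sse = ((x - centers[labels]) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, labels, centers)
    return best[1], best[2]

labels, centers = kmeans(profiles, k=3)
print(np.bincount(labels, minlength=3))    # cluster sizes
```

Robust clusters (stable across restarts, well separated) would then be the candidates for interpretation as distinct mental models, with interviews used for triangulation as the abstract describes.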
Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D
2016-01-01
Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any constrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution was used to relax the constrictive assumption of the normal distribution of the error. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
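The idea of letting the data choose the error power can be sketched with a generalized Gaussian (power-error) likelihood, which contains the Laplace (p = 1) and normal (p = 2) cases as special values. This is a simplified stand-in for the paper's method, on synthetic data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)

# Synthetic outcome with two predictors and Laplace (power ~ 1) errors.
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.laplace(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x1, x2])

def nll(theta):
    """Generalized Gaussian NLL: density ~ exp(-|e/s|^p) / (2 s Gamma(1+1/p))."""
    b, log_s, log_p = theta[:3], theta[3], theta[4]
    s, p = np.exp(log_s), np.exp(log_p)
    e = y - X @ b
    return n * (np.log(2 * s) + gammaln(1 + 1 / p)) + np.sum(np.abs(e / s) ** p)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # warm start from OLS
resid = y - X @ b_ols
start = np.concatenate([b_ols, [np.log(resid.std()), np.log(2.0)]])
res = minimize(nll, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-7, "fatol": 1e-7})
b_hat, p_hat = res.x[:3], np.exp(res.x[4])
print("coefficients:", np.round(b_hat, 2), " estimated error power:", round(p_hat, 2))
```

With Laplace-generated errors the fitted power lands well below two, mirroring the paper's finding that the conventional value is not supported by the data.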
CSIR Research Space (South Africa)
Casini, G
2012-10-01
Full Text Available In this paper we propose two simple procedures to assist modelers with integrating these assumptions into their models, thereby allowing for a more complete translation into DLs....
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
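Neither of the paper's two tests is reproduced here; as a hedged illustration of the Monte Carlo logic, the sketch below simulates two groups whose Weibull shapes differ (so their hazard ratio varies over time, violating proportional hazards) and estimates the power of a likelihood-ratio test for a common shape:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(3)

def prof_ll(k, t):
    # Weibull log-likelihood with the scale profiled out at fixed shape k.
    n = len(t)
    lam = np.mean(t ** k) ** (1.0 / k)
    return n * np.log(k) - n * k * np.log(lam) + (k - 1) * np.log(t).sum() - n

def max_ll(*samples):
    # Maximize the summed profile likelihood over a single shared shape.
    res = minimize_scalar(lambda k: -sum(prof_ll(k, t) for t in samples),
                          bounds=(0.05, 20.0), method="bounded")
    return -res.fun

def one_trial(n=100):
    g = rng.integers(0, 2, n)
    t = np.where(g == 0, rng.weibull(1.0, n), rng.weibull(2.0, n))
    ll_sep = max_ll(t[g == 0]) + max_ll(t[g == 1])     # group-specific shapes
    ll_com = max_ll(t[g == 0], t[g == 1])              # common shape
    return 2 * (ll_sep - ll_com) > chi2.ppf(0.95, df=1)

power = np.mean([one_trial() for _ in range(200)])
print(f"estimated power = {power:.2f}")
```

The outer loop (simulate, test, count rejections) is the generic structure of such power studies; only the inner test differs from the paper's model-based and martingale-residual methods.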
A computational model to investigate assumptions in the headturn preference procedure
Directory of Open Access Journals (Sweden)
Christina eBergmann
2013-10-01
Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
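The "positive weighted sums of simpler constituents" assumption is, in effect, a non-negative decomposition. A minimal sketch with synthetic feature vectors (not the model's actual episodic representations):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Hypothetical constituent episodes as columns of a feature matrix.
constituents = np.abs(rng.normal(size=(40, 5)))
true_w = np.array([0.7, 0.0, 1.2, 0.0, 0.4])          # sparse positive weights
episode = constituents @ true_w + 0.01 * rng.normal(size=40)

# Decompose the complex episode as a positive weighted sum of constituents.
w_hat, resid_norm = nnls(constituents, episode)
print("recovered weights:", np.round(w_hat, 2))
```

Non-negative least squares enforces exactly the constraint the matching procedure assumes: constituents can only add evidence, never subtract it.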
Korostil, Igor A; Peters, Gareth W; Law, Matthew G; Regan, David G
2013-04-08
Deterministic dynamic compartmental transmission models (DDCTMs) of human papillomavirus (HPV) transmission have been used in a number of studies to estimate the potential impact of HPV vaccination programs. In most cases, the models were built under the assumption that an individual who cleared HPV infection develops (life-long) natural immunity against re-infection with the same HPV type (this is known as the SIR scenario). This assumption was also made by two Australian modelling studies evaluating the impact of the National HPV Vaccination Program to assist in the health-economic assessment of male vaccination. An alternative view denying natural immunity after clearance (the SIS scenario) was only presented in one study, although neither scenario has been supported by strong evidence. Some recent findings, however, provide arguments in favour of SIS. We developed HPV transmission models implementing life-time (SIR), limited, and non-existent (SIS) natural immunity. For each model we estimated the herd immunity effect of the ongoing Australian HPV vaccination program and its extension to cover males. Given the Australian setting, we aimed to clarify the extent to which the choice of model structure would influence estimation of this effect. A statistically robust and efficient calibration methodology was applied to ensure credibility of our results. We observed that for non-SIR models the herd immunity effect measured in relative reductions in HPV prevalence in the unvaccinated population was much more pronounced than for the SIR model. For example, with vaccine efficacy of 95% for females and 90% for males, the reductions for HPV-16 were 3% in females and 28% in males for the SIR model, and at least 30% (females) and 60% (males) for non-SIR models. The magnitude of these differences implies that evaluations of the impact of vaccination programs using DDCTMs should incorporate several model structures until our understanding of natural immunity is improved.
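The SIR-versus-SIS contrast can be sketched with a toy homogeneous-mixing model; all parameters below are arbitrary illustrations, not calibrated to HPV or to the Australian program:

```python
import numpy as np

def endemic_prevalence(model, beta=3.0, gamma=1.0, mu=0.02, v=0.0,
                       dt=0.01, steps=200_000):
    """Euler-integrate a normalized compartmental model with a fraction v
    vaccinated; return the long-run infected fraction."""
    s, i = (1.0 - v) - 1e-3, 1e-3
    for _ in range(steps):
        if model == "SIS":                 # clearance returns hosts to susceptible
            ds = -beta * s * i + gamma * i
            di = beta * s * i - gamma * i
        else:                              # SIR with demography: clearance -> immune
            ds = mu * (1.0 - v) - beta * s * i - mu * s
            di = beta * s * i - (gamma + mu) * i
        s, i = s + dt * ds, i + dt * di
    return i

reductions = {}
for model in ("SIS", "SIR"):
    base = endemic_prevalence(model, v=0.0)
    vacc = endemic_prevalence(model, v=0.5)
    reductions[model] = 1.0 - vacc / base   # herd effect on prevalence
print(reductions)
```

With these arbitrary parameters the two structures happen to give similar relative reductions; the paper's point is precisely that calibrated models can diverge sharply, so the structural choice has to be carried as a source of uncertainty.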
Van Laar, Margriet; Van Der Pol, Peggy; Niesink, Raymond
2016-08-01
The Netherlands has seen an increase in Δ9-tetrahydrocannabinol (THC) concentrations from approximately 8% in the 1990s up to 20% in 2004. Increased cannabis potency may lead to higher THC-exposure and cannabis related harm. The Dutch government officially condones the sale of cannabis from so called 'coffee shops', and the Opium Act distinguishes cannabis as a Schedule II drug with 'acceptable risk' from other drugs with 'unacceptable risk' (Schedule I). Even in 1976, however, cannabis potency was taken into account by distinguishing hemp oil as a Schedule I drug. In 2011, an advisory committee recommended tightening up legislation, leading to a 2013 bill proposing the reclassification of high potency cannabis products with a THC content of 15% or more as a Schedule I drug. The purpose of this measure was twofold: to reduce public health risks and to reduce illegal cultivation and export of cannabis by increasing punishment. This paper focuses on the public health aspects and describes the (explicit and implicit) assumptions underlying this '15% THC measure', as well as to what extent these are supported by scientific research. Based on scientific literature and other sources of information, we conclude that the 15% measure can provide in theory a slight health benefit for specific groups of cannabis users (i.e., frequent users preferring strong cannabis, purchasing from coffee shops, using 'steady quantities' and not changing their smoking behaviour), but certainly not for all cannabis users. These gains should be weighed against the investment in enforcement and the risk of unintended (adverse) effects. Given the many assumptions and uncertainty about the nature and extent of the expected buying and smoking behaviour changes, the measure is a political choice and based on thin evidence. Copyright © 2016 Springer. Published by Elsevier B.V. All rights reserved.
Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn
2016-11-01
It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.
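The index of moderated mediation can be sketched with two OLS regressions on synthetic data that deliberately include an unmeasured common cause u; all variable names and coefficients are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000

x = rng.integers(0, 2, n).astype(float)   # randomized treatment
w = rng.normal(size=n)                    # moderator
u = rng.normal(size=n)                    # unmeasured common cause of M and Y
m = 0.5 * x + 0.3 * u + rng.normal(size=n)
y = 0.4 * m + 0.6 * m * w + 0.3 * u + rng.normal(size=n)

def ols(design, target):
    return np.linalg.lstsq(design, target, rcond=None)[0]

a1 = ols(np.column_stack([np.ones(n), x]), m)[1]                 # X -> M path
b3 = ols(np.column_stack([np.ones(n), x, m, w, m * w]), y)[4]    # M*W coefficient
index = a1 * b3     # change in the mediated effect per unit of the moderator
print("index of moderated mediation ~", round(index, 2))         # true value 0.3
```

In this setup the coefficient on m itself is biased by u, yet the product a1*b3 targeting the index remains close to its true value, echoing the paper's argument that the index is identifiable under weaker confounding conditions.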
Yousefian, Pedram; Tiryakioğlu, Murat
2018-02-01
An in-depth discussion of pore formation is presented in this paper by first reinterpreting in situ observations reported in the literature as well as assumptions commonly made to model pore formation in aluminum castings. The physics of pore formation is reviewed through theoretical fracture pressure calculations based on classical nucleation theory for homogeneous and heterogeneous nucleation, with and without dissolved gas, i.e., hydrogen. Based on the fracture pressure for aluminum, critical pore size and the corresponding probability of vacancies clustering to form that size have been calculated using thermodynamic data reported in the literature. Calculations show that it is impossible for a pore to nucleate either homogeneously or heterogeneously in aluminum, even with dissolved hydrogen. The formation of pores in aluminum castings can only be explained by inflation of entrained surface oxide films (bifilms) under reduced pressure and/or with dissolved gas, which involves only growth, avoiding any nucleation problem. This mechanism is consistent with the reinterpretations of in situ observations as well as the assumptions made in the literature to model pore formation.
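The scale argument can be sketched from the Laplace pressure relation alone; the surface tension figure below is an assumed round number for liquid aluminum, used only for orders of magnitude:

```python
# How large must a cavity be to survive a given pressure deficit?
# Laplace relation: delta_p = 2*sigma/r  =>  critical radius r* = 2*sigma/delta_p.
SIGMA = 0.9          # N/m, assumed surface tension of liquid aluminum
ATM = 101_325.0      # Pa

def critical_radius(delta_p):
    return 2.0 * SIGMA / delta_p

for dp_atm in (1, 10, 100, 1000):
    r = critical_radius(dp_atm * ATM)
    print(f"deficit {dp_atm:>5} atm -> critical radius {r * 1e6:10.4f} micrometres")
```

Even at a 1000 atm deficit the critical radius is on the order of tens of nanometres, vastly larger than a vacancy cluster, which is the quantitative intuition behind the abstract's conclusion that homogeneous or heterogeneous nucleation is implausible and bifilm inflation must be invoked.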
Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies
Energy Technology Data Exchange (ETDEWEB)
Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Townsend, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bloom, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-01-01
Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour ahead commitment step is included before the dispatch step and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0
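The unit-level metrics named above (starts, average run-time per start, ramps) are straightforward to compute from a dispatch series; a toy illustration on a made-up hourly profile:

```python
import numpy as np

# Hypothetical hourly output of one generator over a day fragment (MW).
output = np.array([0, 0, 120, 150, 150, 90, 0, 0, 200, 200, 180, 0], dtype=float)

on = output > 0
starts = int(np.sum(on[1:] & ~on[:-1]) + (1 if on[0] else 0))

run_lengths, run = [], 0
for state in on:                       # lengths of contiguous on-periods
    if state:
        run += 1
    elif run:
        run_lengths.append(run)
        run = 0
if run:
    run_lengths.append(run)
avg_runtime = sum(run_lengths) / starts

# Count a "ramp" here as any hour-to-hour output change while the unit stays on.
ramps = int(np.sum((np.diff(output) != 0) & on[1:] & on[:-1]))
print(starts, avg_runtime, ramps)
```

Coarsening the dispatch resolution smooths the output series, which is exactly why the paper finds the temporal-resolution assumption bites hardest on these unit-level counts.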
Lifelong Learning: Foundational Models, Underlying Assumptions and Critiques
Regmi, Kapil Dev
2015-01-01
Lifelong learning has become a catchword in almost all countries because of its growing influence on education policies in the globalised world. In the Organisation for Economic Cooperation and Development (OECD) and the European Union (EU), the promotion of lifelong learning has been a strategy to speed up economic growth and become competitive.…
Directory of Open Access Journals (Sweden)
Isabelle Ellis
2009-12-01
Full Text Available This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models. Namely, that there are a clear set of instructions for patients to comply with, that all health care providers agree with; and that the health care provider and the patient agree with the chronic disease self-management plan that was developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. These assumptions have been found to lack validity in this context, therefore an alternative model to enhance chronic disease care is proposed.
Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel
2017-10-01
The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.
Investigating assumptions of crown archetypes for modelling LiDAR returns
Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.
2013-01-01
LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid
Testing the normality assumption in the sample selection model with an application to travel demand
van der Klaauw, B.; Koning, R.H.
2003-01-01
In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.
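The flavour of such a test can be sketched in one dimension: embed the normal in a squared Hermite-series density and likelihood-ratio test whether the extra coefficients are zero. The actual test operates inside the bivariate sample selection model; this is only a univariate stand-in on synthetic "residuals":

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, norm

rng = np.random.default_rng(6)

def he3(x): return x**3 - 3.0 * x            # probabilists' Hermite polynomials
def he4(x): return x**4 - 6.0 * x**2 + 3.0

def nll(a, z):
    """f(z) = phi(z)*(1 + a3*He3 + a4*He4)^2 / (1 + 6*a3^2 + 24*a4^2);
    the denominator normalizes the density via Hermite orthogonality."""
    a3, a4 = a
    poly = 1.0 + a3 * he3(z) + a4 * he4(z)
    return -(norm.logpdf(z).sum()
             + 2.0 * np.log(np.abs(poly) + 1e-300).sum()
             - z.size * np.log(1.0 + 6.0 * a3**2 + 24.0 * a4**2))

# Stand-in residuals: a skewed sample, standardized.
z = rng.gamma(4.0, size=500)
z = (z - z.mean()) / z.std()

fit = minimize(nll, x0=[0.0, 0.0], args=(z,), method="Nelder-Mead")
lr = 2.0 * (nll([0.0, 0.0], z) - fit.fun)    # H0: a3 = a4 = 0, i.e. normality
p_value = chi2.sf(lr, df=2)
print(f"LR = {lr:.1f}, p = {p_value:.2g}")
```

Bivariate normality sits at the origin of the coefficient space, so the test reduces to a standard nested-model likelihood-ratio comparison, as in the article.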
International Nuclear Information System (INIS)
Akber, R.
1996-01-01
Full text: A review of the literature relating to exposure to, and exposure limits for, ionising radiation, inorganic lead, asbestos and noise was undertaken. The four hazards were chosen because they were insidious and ubiquitous, were potential hazards in both occupational and environmental settings and had early and late effects depending on dose and dose rate. For all four hazards, the effect of the hazard was enhanced by other exposures such as smoking or organic solvents. In the cases of inorganic lead and noise, there were documented health effects which affected a significant percentage of the exposed populations at or below the [effective] exposure limits. This was not the case for ionising radiation and asbestos. None of the exposure limits considered exposure to multiple mutagens/carcinogens in the calculation of risk. Ionising radiation was the only one of the hazards to have a model of all likely exposures, occupational, environmental and medical, as the basis for the exposure limits. The other three considered occupational exposure in isolation from environmental exposure. Inorganic lead and noise had economic considerations underlying the exposure limits, and the exposure limits for asbestos were based on the current limit of detection. All four hazards had many variables associated with exposure, including idiosyncratic factors, that made modelling the risk very complex. The scientific idea of a time-weighted average based on an eight-hour day and forty-hour week, on which the exposure limits for lead, asbestos and noise were based, was underpinned by neither empirical evidence nor scientific hypothesis. The methodology of the ACGIH in the setting of limits later brought into law may have been unduly influenced by the industries most closely affected by those limits. Measuring exposure over part of an eight-hour day and extrapolating to model exposure over the longer term is not the most effective way to model exposure. The statistical techniques used
International Nuclear Information System (INIS)
Kraicheva, Z.T.; Tutukov, A.V.; Yungel'son, L.R.
1986-01-01
A simple method is proposed for describing the evolution of semidetached close binaries whose secondary components have degenerate helium cores and lose orbital angular momentum by a magnetic stellar wind. The results of calculations are used to estimate the initial parameters of a series of low-mass (M1 + M2 ≤ 5 M⊙) systems of Algol type under the two assumptions of conservative and nonconservative evolution with respect to the orbital angular momentum. Only the assumption that the systems with secondary components possessing convective shells lose angular momentum makes it possible to reproduce their initial parameters without contradiction.
A computational model to investigate assumptions in the headturn preference procedure
Bergmann, C.; Bosch, L.F.M. ten; Fikkert, J.P.M.; Boves, L.W.J.
2013-01-01
In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2)
A method for the analysis of assumptions in model-based environmental assessments
Kloprogge, P.; van der Sluijs, J.P.; Petersen, A.C.
2011-01-01
make many assumptions. This inevitably involves – to some degree – subjective judgements by the analysts. Although the potential value-ladenness of model-based assessments has been extensively problematized in literature, this has not so far led to a systematic strategy for analyzing this
Chang, Feng-Hsun; Lawrence, Justin E; Rios-Touma, Blanca; Resh, Vincent H
2014-04-01
Tolerance values (TVs) based on benthic macroinvertebrates are one of the most widely used tools for monitoring the biological impacts of water pollution, particularly in streams and rivers. We compiled TVs of benthic macroinvertebrates from 29 regions around the world to test 11 basic assumptions about pollution tolerance, that: (1) Arthropoda are macroinvertebrates macroinvertebrate taxa < Isopoda + Gastropoda + Hirudinea; (6) Ephemeroptera + Plecoptera + Trichoptera (EPT) < Odonata + Coleoptera + Heteroptera (OCH); (7) EPT < non-EPT insects; (8) Diptera < Insecta; (9) Bivalvia < Gastropoda; (10) Baetidae < other Ephemeroptera; and (11) Hydropsychidae < other Trichoptera. We found that the first eight of these 11 assumptions were supported despite regional variability. In addition, we examined the effect of Best Professional Judgment (BPJ) and non-independence of TVs among countries by performing all analyses using subsets of the original dataset. These subsets included a group based on those systems using TVs that were derived from techniques other than BPJ, and groups based on methods used for TV assignment. The results obtained from these subsets and the entire dataset are similar. We also made seven a priori hypotheses about the regional similarity of TVs based on geography. Only one of these was supported. Development of TVs and the reporting of how they are assigned need to be more rigorous and be better described.
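Group-wise comparisons of the kind listed can be sketched with a rank test on hypothetical TV samples; scores and group sizes below are invented, using assumption (7), EPT more sensitive than non-EPT insects, as the example:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(10)

# Hypothetical tolerance values on a 0 (sensitive) to 10 (tolerant) scale.
tv_ept = np.clip(rng.normal(3.0, 2.0, 60), 0, 10)        # EPT taxa
tv_non_ept = np.clip(rng.normal(6.0, 2.0, 80), 0, 10)    # non-EPT insects

stat, p = mannwhitneyu(tv_ept, tv_non_ept, alternative="less")
print(f"one-sided Mann-Whitney p = {p:.2g}")   # small p supports EPT < non-EPT
```

A nonparametric test is a natural choice here because TVs are ordinal scores assigned on differing regional scales rather than interval measurements.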
Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...
Cloud-turbulence interactions: Sensitivity of a general circulation model to closure assumptions
International Nuclear Information System (INIS)
Brinkop, S.; Roeckner, E.
1993-01-01
Several approaches to parameterize the turbulent transport of momentum, heat, water vapour and cloud water for use in a general circulation model (GCM) have been tested in one-dimensional and three-dimensional model simulations. The schemes differ with respect to their closure assumptions (conventional eddy diffusivity model versus turbulent kinetic energy closure) and also regarding their treatment of cloud-turbulence interactions. The basic properties of these parameterizations are discussed first in column simulations of a stratocumulus-topped atmospheric boundary layer (ABL) under a strong subsidence inversion during the KONTROL experiment in the North Sea. It is found that the K-models tend to decouple the cloud layer from the adjacent layers because the turbulent activity is calculated from local variables. The higher-order scheme performs better in this respect because internally generated turbulence can be transported up and down through the action of turbulent diffusion. Thus, the TKE-scheme provides not only a better link between the cloud and the sub-cloud layer but also between the cloud and the inversion as a result of cloud-top entrainment. In the stratocumulus case study, where the cloud is confined by a pronounced subsidence inversion, increased entrainment favours cloud dilution through enhanced evaporation of cloud droplets. In the GCM study, however, additional cloud-top entrainment supports cloud formation because indirect cloud generating processes are promoted through efficient ventilation of the ABL, such as the enhanced moisture supply by surface evaporation and the increased depth of the ABL. As a result, tropical convection is more vigorous, the hydrological cycle is intensified, the whole troposphere becomes warmer and moister in general and the cloudiness in the upper part of the ABL is increased. (orig.)
Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions
Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.
2015-01-01
Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
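The heterogeneity problem the authors describe can be sketched with a two-occasion Chapman estimator on simulated animals whose detection probabilities vary between individuals; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
N, SIMS = 200, 2000    # true abundance; Monte Carlo replicates

def chapman_estimate(heterogeneous):
    p = rng.beta(2, 2, N) if heterogeneous else np.full(N, 0.5)
    marked = rng.random(N) < p      # marking occasion
    seen = rng.random(N) < p        # resight occasion (same individual p)
    M, n, m = marked.sum(), seen.sum(), (marked & seen).sum()
    return (M + 1) * (n + 1) / (m + 1) - 1

est_hom = np.mean([chapman_estimate(False) for _ in range(SIMS)])
est_het = np.mean([chapman_estimate(True) for _ in range(SIMS)])
print(f"homogeneous: {est_hom:.0f}, heterogeneous: {est_het:.0f} (true N = {N})")
```

Highly detectable animals are over-represented among both the marked and the resighted, inflating recaptures and biasing abundance low; this is the kind of capture heterogeneity that the ancillary video and telemetry data were used to diagnose and that the more sophisticated mark-resight models absorb.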
Bootstrapping realized volatility and realized beta under a local Gaussianity assumption
DEFF Research Database (Denmark)
Hounyo, Ulrich
The main contribution of this paper is to propose a new bootstrap method for statistics based on high frequency returns. The new method exploits the local Gaussianity and the local constancy of volatility of high frequency returns, two assumptions that can simplify inference in the high frequency context, as recently explained by Mykland and Zhang (2009). Our main contributions are as follows. First, we show that the local Gaussian bootstrap is first-order consistent when used to estimate the distributions of realized volatility and realized betas. Second, we show that the local Gaussian bootstrap matches accurately the first four cumulants of realized volatility, implying that this method provides third-order refinements. This is in contrast with the wild bootstrap of Gonçalves and Meddahi (2009), which is only second-order correct. Third, we show that the local Gaussian bootstrap is able...
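A minimal sketch of the resampling idea (not the paper's full procedure or its extension to betas): within short blocks, returns are treated as i.i.d. Gaussian with the block's empirical variance and redrawn, and realized volatility is recomputed on each replication:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated intraday returns with slowly varying volatility.
n, block = 390, 30
sigma = 0.01 * (1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n)))
r = sigma * rng.normal(size=n)
rv = np.sum(r ** 2)                    # realized volatility (realized variance)

def local_gaussian_bootstrap(r, block, B=999):
    """Treat returns within each block as i.i.d. Gaussian with the block's
    empirical variance, redraw them, and recompute RV each replication."""
    blocks = r.reshape(-1, block)
    local_sd = blocks.std(axis=1, ddof=0)
    draws = np.empty(B)
    for b in range(B):
        sim = rng.normal(size=blocks.shape) * local_sd[:, None]
        draws[b] = np.sum(sim ** 2)
    return draws

draws = local_gaussian_bootstrap(r, block)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"RV = {rv:.6f}, 95% bootstrap interval = [{lo:.6f}, {hi:.6f}]")
```

The block length trades off the two assumptions the paper leans on: short blocks make local constancy of volatility more credible, longer blocks give a better local variance estimate.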
Zheng, Shimin; Rao, Uma; Bartolucci, Alfred A.; Singh, Karan P.
2011-01-01
Bartolucci et al. (2003) extended the distribution assumption from the normal (Lyles et al., 2000) to the elliptically contoured distribution (ECD) for random regression models used in the analysis of longitudinal data accounting for both undetectable values and informative drop-outs. In this paper, the random regression models are constructed on the multivariate skew ECD. A real data set is used to illustrate that the skew ECDs can fit some unimodal continuous data better than the Gaussian distributions or more general continuous symmetric distributions when the symmetric distribution assumption is violated. A simulation study is also carried out to illustrate the model fit for a variety of skew ECDs. The software used is SAS/STAT, V. 9.13. PMID:21637734
Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf
2012-01-01
This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP1 (ASM1 & 3) and WWTP2 (ASM2d). The second set of models includes a reactive settler, which extends the description of the non-reactive TSS sedimentation and transport in the reference case with the full set of ASM processes. Finally, the third set of models is based on including electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the S(NOx) concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases X(OHO) and X(ANO) decay; and, finally, (5) increases the growth of X(PAO) and formation of X(PHA,Stor) for ASM2d, which has a major impact on the whole P removal system. Introduction of electron acceptor dependent decay leads to a substantial increase of the concentration of X(ANO), X(OHO) and X(PAO) in the bottom of the clarifier. The paper ends with a critical discussion of the influence of the different model assumptions, and emphasizes the need for a model user to understand the significant differences in simulation results that are obtained when applying different combinations of 'standard' models.
Servant, Mathieu; White, Corey; Montagnini, Anna; Burle, Borís
2015-07-15
Most decisions that we make build upon multiple streams of sensory evidence and control mechanisms are needed to filter out irrelevant information. Sequential sampling models of perceptual decision making have recently been enriched by attentional mechanisms that weight sensory evidence in a dynamic and goal-directed way. However, the framework retains the longstanding hypothesis that motor activity is engaged only once a decision threshold is reached. To probe latent assumptions of these models, neurophysiological indices are needed. Therefore, we collected behavioral and EMG data in the flanker task, a standard paradigm to investigate decisions about relevance. Although the models captured response time distributions and accuracy data, EMG analyses of response agonist muscles challenged the assumption of independence between decision and motor processes. Those analyses revealed covert incorrect EMG activity ("partial error") in a fraction of trials in which the correct response was finally given, providing intermediate states of evidence accumulation and response activation at the single-trial level. We extended the models by allowing motor activity to occur before a commitment to a choice and demonstrated that the proposed framework captured the rate, latency, and EMG surface of partial errors, along with the speed of the correction process. In return, EMG data provided strong constraints to discriminate between competing models that made similar behavioral predictions. Our study opens new theoretical and methodological avenues for understanding the links among decision making, cognitive control, and motor execution in humans. Sequential sampling models of perceptual decision making assume that sensory information is accumulated until a criterion quantity of evidence is obtained, from where the decision terminates in a choice and motor activity is engaged. The very existence of covert incorrect EMG activity ("partial error") during the evidence accumulation
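A toy version of the proposed model extension can be sketched as follows: a single accumulator drifts toward one of two decision bounds, and a lower "EMG bound" lets motor activity begin before the decision commits, so some correct trials carry incorrect-side EMG (partial errors). All parameters here are illustrative, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift=0.5, noise=1.0, decision_bound=1.5,
                   emg_bound=0.75, dt=0.01, max_t=10.0, rng=rng):
    """One trial of a two-choice sequential sampling model in which
    motor (EMG) activity may start before the decision bound is hit.
    Crossing +/-emg_bound switches EMG on for that response hand;
    crossing +/-decision_bound commits the choice. A 'partial error'
    is EMG on the incorrect side followed by a correct final choice."""
    x, t = 0.0, 0.0
    emg_sides = []
    while abs(x) < decision_bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        side = 1 if x >= emg_bound else (-1 if x <= -emg_bound else 0)
        if side != 0 and (not emg_sides or emg_sides[-1] != side):
            emg_sides.append(side)
    choice = 1 if x >= decision_bound else -1   # +1 is the correct response
    partial_error = (choice == 1) and (-1 in emg_sides)
    return choice, t, partial_error

trials = [simulate_trial() for _ in range(2000)]
accuracy = float(np.mean([c == 1 for c, _, _ in trials]))
partial_rate = float(np.mean([p for _, _, p in trials]))
```

The rate and latency of partial errors in such simulations are exactly the kind of EMG-level statistics the authors use to constrain models that make similar behavioral predictions.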
Sensitivity of Population Size Estimation for Violating Parametric Assumptions in Log-linear Models
Directory of Open Access Journals (Sweden)
Gerritse Susanna C.
2015-09-01
An important quality aspect of censuses is the degree of coverage of the population. When administrative registers are available, undercoverage can be estimated via capture-recapture methodology. The standard approach uses the log-linear model, which relies on the assumption that being in the first register is independent of being in the second register. In models using covariates, this assumption of independence is relaxed to independence conditional on covariates. In this article we describe, in a general setting, how sensitivity analyses can be carried out to assess the robustness of the population size estimate. We make use of log-linear Poisson regression with an offset to simulate departure from the model. This approach can be extended to the case where we have covariates observed in both registers, and to a model with covariates observed in only one register. The robustness of the population size estimate is a function of implied coverage: when implied coverage is low, the robustness is low. We conclude that it is important for researchers to investigate and report the estimated robustness of their population size estimate for quality reasons. Extensions are made to log-linear modeling in the case of more than two registers and to the multiplier method.
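A minimal sketch of such a sensitivity analysis, assuming the simplest two-register setting: under independence the unobserved cell is estimated as n10*n01/n11, and an assumed odds ratio theta between the registers (the role played by the offset in the log-linear Poisson regression) simulates departure from independence. The counts are invented for illustration.

```python
def population_estimate(n11, n10, n01, theta=1.0):
    """Dual-register capture-recapture population size estimate.

    n11: units observed in both registers
    n10, n01: units observed in only one of the registers
    theta: assumed odds ratio between registers; theta=1 reproduces
    the standard independence assumption, and varying theta gives a
    sensitivity analysis for departures from it.
    """
    n00_hat = theta * n10 * n01 / n11   # implied count missed by both registers
    return n11 + n10 + n01 + n00_hat

# Sensitivity of the estimate to the independence assumption
for theta in (0.8, 1.0, 1.2):
    print(theta, round(population_estimate(500, 300, 200, theta)))
```

The spread of estimates across plausible theta values is the robustness measure the article argues should be reported alongside the point estimate.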
Vehicle Modeling for use in the CAFE model: Process description and modeling assumptions
Energy Technology Data Exchange (ETDEWEB)
Moawad, Ayman [Argonne National Lab. (ANL), Argonne, IL (United States); Kim, Namdoo [Argonne National Lab. (ANL), Argonne, IL (United States); Rousseau, Aymeric [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-06-01
The objective of this project is to develop and demonstrate a process that, at a minimum, provides more robust information that can be used to calibrate inputs applicable under the CAFE model’s existing structure. The project will be more fully successful if a process can be developed that minimizes the need for decision trees and replaces the synergy factors by inputs provided directly from a vehicle simulation tool. The report provides a description of the process that was developed by Argonne National Laboratory and implemented in Autonomie.
Scott, Nick; Hellard, Margaret; McBryde, Emma Sue
2016-01-01
The discovery of highly effective hepatitis C virus (HCV) treatments has led to discussion of elimination and intensified interest in models of HCV transmission. In developed settings, HCV disproportionally affects people who inject drugs (PWID), and models are typically used to provide an evidence base for the effectiveness of interventions such as needle and syringe programs, opioid substitution therapy and more recently treating PWID with new generation therapies to achieve specified reductions in prevalence and / or incidence. This manuscript reviews deterministic compartmental S-I, deterministic compartmental S-I-S and network-based transmission models of HCV among PWID. We detail typical assumptions made when modeling injecting risk behavior, virus transmission, treatment and re-infection and how they correspond with available evidence and empirical data.
Do unreal assumptions pervert behaviour?
DEFF Research Database (Denmark)
Petersen, Verner C.
After conducting a series of experiments involving economics students Miller concludes: "The experience of taking a course in microeconomics actually altered students' conceptions of the appropriateness of acting in a self-interested manner, not merely their definition of self-interest." Being...... become taken for granted and tacitly included into theories and models of management. Guiding business and management to behave in a fashion that apparently makes these assumptions become "true". Thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some...... of the basic assumptions underlying the theories found in economics. Assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert...
NONLINEAR MODELS FOR DESCRIPTION OF CACAO FRUIT GROWTH WITH ASSUMPTION VIOLATIONS
Directory of Open Access Journals (Sweden)
JOEL AUGUSTO MUNIZ
2017-01-01
Cacao (Theobroma cacao L.) is an important fruit crop in the Brazilian economy, mainly cultivated in the south of the State of Bahia. The optimal stage for harvesting is a major factor in fruit quality, and knowledge of its growth curves can help, especially in identifying the ideal maturation stage for harvesting. Nonlinear regression models have been widely used for the description of growth curves. However, several studies on this subject do not consider the residual analysis, the possible dependence between longitudinal observations, or the heterogeneity of sample variances, compromising the quality of the modeling. The objective of this work was to compare the fit of nonlinear regression models, considering residual analysis and assumption violations, in the description of cacao (clone Sial-105) fruit growth. The data evaluated were extracted from Brito and Silva (1983), who conducted the experiment at the Cacao Research Center, Ilheus, State of Bahia. The variables fruit length, diameter and volume as a function of fruit age were studied. The use of weighting and the incorporation of residual dependence were efficient, since the modeling became more consistent, improving the model fit. Considering a first-order autoregressive structure, when needed, leads to a significant reduction in the residual standard deviation, making the estimates more reliable. The Logistic model was the most efficient for describing cacao fruit growth.
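As a hedged illustration of the winning model class, the sketch below fits a three-parameter logistic curve to hypothetical fruit-length measurements (the values are invented, not the Brito & Silva (1983) data, and the paper's weighting and AR(1) residual structure are omitted). For fixed shape parameters the asymptote enters linearly, so a coarse grid search with a closed-form asymptote suffices here.

```python
import numpy as np

def logistic(t, a, b, c):
    # a: asymptotic size; b: location parameter; c: growth-rate parameter
    return a / (1.0 + np.exp(b - c * t))

# Hypothetical fruit-length measurements (cm) against age (days)
age = np.array([15, 30, 45, 60, 75, 90, 105, 120, 135, 150], float)
length = np.array([2.1, 4.0, 7.5, 12.0, 16.2, 19.0, 20.5, 21.2, 21.5, 21.6])

# Grid search over (b, c); for fixed (b, c) the least-squares asymptote
# a has the closed form (y.s)/(s.s) with s the unit-asymptote logistic.
best = (np.inf, (0.0, 0.0, 0.0))
for b in np.linspace(2.0, 6.0, 81):
    for c in np.linspace(0.02, 0.12, 101):
        s = 1.0 / (1.0 + np.exp(b - c * age))
        a = (length @ s) / (s @ s)
        sse = np.sum((length - a * s) ** 2)
        if sse < best[0]:
            best = (sse, (a, b, c))

sse, (a_hat, b_hat, c_hat) = best
residuals = length - logistic(age, a_hat, b_hat, c_hat)
```

In the paper's setting one would go further and inspect these residuals for heteroscedasticity and serial correlation before trusting the fit.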
Directory of Open Access Journals (Sweden)
Emrah Altun
2018-01-01
Most financial institutions compute the Value-at-Risk (VaR) of their trading portfolios using historical simulation-based methods. In this paper, we examine the Filtered Historical Simulation (FHS) model introduced by Barone-Adesi et al. (1999) theoretically and empirically. The main goal of this study is to answer the following question: "Does the assumption on the innovation process play an important role for the Filtered Historical Simulation model?". To this end, we investigate the performance of the FHS model with skewed and fat-tailed innovation distributions such as the normal, skew normal, Student's t, skew-t, generalized error, and skewed generalized error distributions. The performance of the FHS models is evaluated by means of unconditional and conditional likelihood ratio tests and loss functions. Based on the empirical results, we conclude that the FHS models with generalized error and skew-t distributions produce more accurate VaR forecasts.
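A hedged sketch of the FHS mechanic (not the authors' implementation): returns are devolatilized with a filter, here a RiskMetrics-style EWMA standing in for the GARCH filter of Barone-Adesi et al., and the standardized residuals are resampled and rescaled by the forecast volatility. The innovation distribution is thus left empirical, which is exactly the assumption the paper puts under scrutiny. The toy return series is simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

def fhs_var(returns, alpha=0.01, lam=0.94, n_sims=10000, rng=rng):
    """One-day-ahead Value-at-Risk by Filtered Historical Simulation."""
    # EWMA volatility filter (stand-in for a fitted GARCH filter)
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    z = returns / np.sqrt(sigma2)             # standardized residuals
    sigma_next = np.sqrt(lam * sigma2[-1] + (1 - lam) * returns[-1] ** 2)
    # Resample the empirical innovations and rescale by forecast volatility
    sims = sigma_next * rng.choice(z, size=n_sims, replace=True)
    return -np.quantile(sims, alpha)          # reported as a positive loss

r = rng.standard_t(df=5, size=1500) * 0.01    # fat-tailed toy daily returns
print(fhs_var(r, alpha=0.01))
```

Swapping the empirical draw of z for draws from a parametric innovation distribution (skew-t, generalized error, etc.) gives the model variants whose VaR forecasts the paper backtests.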
Directory of Open Access Journals (Sweden)
Gabriela Melano
2000-11-01
Educational systems around the world, and specifically in the United States, have long awaited genuine reform efforts. Technology is often perceived as a panacea, if not as a crucial instrument, in any educational reform effort. In a conversation with one of his students, Doctor Johnson discusses how the underlying assumptions embedded in our current schooling practices need to be seriously reviewed before any technology strategy is considered. New understandings, as opposed to mere information, are what schools need to reach in order to transform themselves. Finally, Dr. Johnson provides two brief examples, one in the United States and another in México, where hermeneutical approaches have been used for educational reform endeavors.
Directory of Open Access Journals (Sweden)
R. Ots
2018-04-01
Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 emissions from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (which are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist, with all emissions redistributed linearly to population density, is also presented as an indicator of the maximum concentrations such an assumption could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than
Ots, Riinu; Heal, Mathew R.; Young, Dominique E.; Williams, Leah R.; Allan, James D.; Nemitz, Eiko; Di Marco, Chiara; Detournay, Anais; Xu, Lu; Ng, Nga L.; Coe, Hugh; Herndon, Scott C.; Mackenzie, Ian A.; Green, David C.; Kuenen, Jeroen J. P.; Reis, Stefan; Vieno, Massimo
2018-04-01
Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist - all emissions redistributed linearly to population density, is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than inventory
Spatial modelling of assumptions of tourism development using geographic IT
Directory of Open Access Journals (Sweden)
Jitka Machalová
2010-01-01
The aim of this article is to show the possibilities of spatial modelling and analysis of the assumptions of tourism development in the Czech Republic, with the objective of making decision-making processes in tourism easier and more efficient (for companies and clients as well as destination managements). The development and placement of tourism depend on the factors (conditions) that influence its application in specific areas. These factors are usually divided into three groups: selective, localization and realization. Tourism is inseparably connected with space, i.e. the countryside. The countryside can be modelled and subsequently analysed by means of geographical information technologies. With the help of spatial modelling and the subsequent analyses, the localization and realization conditions in the regions of the Czech Republic have been evaluated. The best localization conditions have been found in the Liberecký region. The capital city of Prague has negligible natural conditions; however, the social ones are on a high level. Next, the spatial analyses have shown that the best realization conditions are provided by the capital city of Prague, followed by the Central-Bohemian, South-Moravian, Moravian-Silesian and Karlovarský regions. The development of a tourism destination depends not only on the localization and realization factors but also, fundamentally, on the level of local destination management. Spatial modelling can help destination managers in decision-making processes to make optimal use of the destination's potential and to target their marketing activities efficiently.
Dunlap, Lucas
2016-11-01
I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent a quantum solution to the paradoxes of time travel.
The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment
International Nuclear Information System (INIS)
Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern
2006-10-01
This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present the prerequisites, methods and data used in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context, and on the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary: collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis.
The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment
International Nuclear Information System (INIS)
Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern
2006-10-01
This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central parameter is the topography, for which there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained.
The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment
Energy Technology Data Exchange (ETDEWEB)
Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern (eds.)
2006-10-15
This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present the prerequisites, methods and data used in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context, and on the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary: collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis.
The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment
Energy Technology Data Exchange (ETDEWEB)
Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern (eds.)
2006-10-15
This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central parameter is the topography, for which there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained.
Assumptions to the model of managing knowledge workers in modern organizations
Directory of Open Access Journals (Sweden)
Igielski Michał
2017-05-01
Changes in the twenty-first century are faster, appear suddenly, and are not always desirable for the smooth functioning of a company. This is the domain of globalization, in which new events, opportunities or threats, constantly force the company to act. More and more depends on the intangible assets of the undertaking, on its strategic potential. Certain types of work require more knowledge, experience and independent thinking than others. Therefore, in this article the author takes up the subject of knowledge workers in contemporary organizations. The aim of the study is to attempt to formulate the assumptions of a knowledge management model for these organizations, based on literature analysis and empirical research. In this regard, the author describes the contemporary conditions of employee management and the skills and competences of knowledge workers. In addition, he conducted research (2016) in 100 medium-sized enterprises in the province of Pomerania, using a questionnaire and an interview. Already at the beginning of the analysis of the collected data, it turned out that it should be important for all employers to recognize the emergence of a new category of managers who have knowledge useful for the functioning of the company. Moreover, from experience gained in a similar research process previously carried out in companies from the Baltic Sea Region, the author knew about the positive influence of these people on creating new solutions and improving the quality of existing products or services.
TENVERGERT, E; GILLESPIE, M; KINGMA, J
This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total
Fitzpatrick, M. Ryleigh; Silva Martins-Filho, Walter; Griffith, Caitlin Ann; Pearson, Kyle; Zellem, Robert Thomas; AzGOE
2016-10-01
The analysis of ground-based photometric observations of planetary transits must treat the effects of the Earth's atmosphere, which exceed the signal of the extrasolar planet. Generally, this is achieved by dividing the signal of the host star and planet by that of nearby field stars to reveal the lightcurve. The lightcurve is then fit to a model of the planet's orbit and physical characteristics, also taking into account the characteristics of the star. The fit to the in- and out-of-transit data establishes the depth of the lightcurve. The question arises: what is the best way to select and treat reference stars to best characterize and remove the shared atmospheric systematics that plague our transit signal? To explore these questions we examine the effects of several assumptions that underlie the calculation of the lightcurve depth. Our study involves repeated photometric observations of hot Jupiter primary transits in the U and B filters. Data were taken with the University of Arizona's Kuiper 1.55 m telescope/Mont4K CCD. Each exoplanet observed offers a unique field with stars of various brightnesses, spectral types and angular distances from the host star. While these observations are part of a larger study of the Rayleigh scattering signature of hot Jupiter exoplanets, here we study the effects of various choices made during reduction, specifically the treatment of reference stars and atmospheric systematics. We calculate the lightcurve for all permutations of reference stars, considering several out-of-transit assumptions (e.g. linear, quadratic or exponential). We assess the sensitivity of the transit depths based on the spread of values. In addition, we look for characteristics that minimize the scatter in the reduced lightcurve and analyze the effects of the treatment of individual variables on the resultant lightcurve model. Here we present the results of an in-depth statistical analysis that classifies the effect of various parameters and choices involved in
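The reference-star permutation idea can be sketched in a few lines, under simplifying assumptions: a shared airmass trend, a box-shaped transit, and no per-frame outliers (all values below are synthetic, and the out-of-transit detrending options the abstract mentions are omitted).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

def differential_lightcurve(target, refs):
    """Divide the target flux by the summed flux of the chosen reference
    stars to remove shared atmospheric systematics, then normalize to
    the median level."""
    lc = target / refs.sum(axis=0)
    return lc / np.median(lc)

def depth_over_reference_sets(target, refs, in_transit):
    """Transit depth for every non-empty subset of reference stars;
    the spread of the depths gauges sensitivity to the choice."""
    depths = []
    for k in range(1, len(refs) + 1):
        for combo in combinations(range(len(refs)), k):
            lc = differential_lightcurve(target, refs[list(combo)])
            depths.append(np.median(lc[~in_transit]) - np.median(lc[in_transit]))
    return np.array(depths)

# Toy frames: shared airmass trend, a 1% box transit, per-star noise
n = 200
frame = np.arange(n)
trend = 1.0 + 0.05 * np.linspace(0.0, 1.0, n)
in_transit = (frame > 80) & (frame < 120)
target = trend * (1.0 - 0.01 * in_transit) + rng.normal(0.0, 5e-4, n)
refs = np.array([trend * f + rng.normal(0.0, 5e-4, n) for f in (0.8, 1.1, 1.4)])

depths = depth_over_reference_sets(target, refs, in_transit)
```

With three reference stars there are seven subsets; in real data the scatter of these seven depths, rather than their mean, is the diagnostic of interest.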
Directory of Open Access Journals (Sweden)
L. Meng
2012-07-01
Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH_{4} yr^{−1} (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH_{4} yr^{−1}. Tropical wetlands contributed 201 Tg CH_{4} yr^{−1}, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH_{4} yr^{−1}. However, sensitivity studies show a large range (150–346 Tg CH_{4} yr^{−1}) in predicted global methane emissions (excluding emissions from rice paddies). The large range is
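The abstract does not give the form of the pH and redox limitation functions; a common multiplicative formulation, with every shape and parameter value assumed purely for illustration (not taken from CLM4Me), might look like:

```python
import numpy as np

def f_ph(ph, ph_opt=6.5, width=2.5):
    """Bell-shaped pH limitation: 1 at the optimum, tapering toward 0."""
    return np.exp(-((ph - ph_opt) / width) ** 2)

def f_redox(eh_mv, eh_crit=-100.0, scale=50.0):
    """Methanogenesis requires low redox potential; a logistic cutoff in Eh (mV)."""
    return 1.0 / (1.0 + np.exp((eh_mv - eh_crit) / scale))

def ch4_production(base_rate, ph, eh_mv, temp_c, q10=2.0, t_ref=25.0):
    """Base production modulated multiplicatively by pH, redox and temperature."""
    return base_rate * f_ph(ph) * f_redox(eh_mv) * q10 ** ((temp_c - t_ref) / 10)

# Anoxic, near-optimal-pH soil at the reference temperature.
rate = ch4_production(1.0, ph=6.5, eh_mv=-200.0, temp_c=25.0)
```

The multiplicative structure is what makes such functions easy to add to an existing production term; the specific shapes used in the model may differ.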
International Nuclear Information System (INIS)
Jakob, A.
2004-07-01
In this report a comprehensive overview on the matrix diffusion of solutes in fractured crystalline rocks is presented. Some examples from observations in crystalline bedrock are used to illustrate that matrix diffusion indeed acts on various length scales. Fickian diffusion is discussed in detail, followed by some considerations on rock porosity. Because the dual-porosity medium model is a very common and versatile method for describing solute transport in fractured porous media, the transport equations and the fundamental assumptions, approximations and simplifications are discussed in detail. There is a variety of geometrical aspects, processes and events which could influence matrix diffusion. The most important of these, such as the effect of the flow-wetted fracture surface, channelling and the limited extent of the porous rock available for matrix diffusion, are addressed. In a further section, open issues and unresolved problems related to matrix diffusion are mentioned. Since matrix diffusion is one of the key retarding processes in geosphere transport of dissolved radionuclide species, it was consequently taken into account in past performance assessments of radioactive waste repositories in crystalline host rocks. Some issues regarding matrix diffusion are site-specific while others are independent of the specific situation of a planned repository for radioactive wastes. Eight different performance assessments from Finland, Sweden and Switzerland were considered with the aim of finding out how matrix diffusion was addressed, and whether a consistent picture emerges regarding the varying methodology of the different radioactive waste organisations. In the final section of the report some conclusions are drawn and an outlook is given. An extensive bibliography provides the reader with the key papers and reports related to matrix diffusion. (author)
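Fickian diffusion from a fracture into a semi-infinite porous matrix has a standard closed-form profile for a constant-concentration boundary, which conveys the length scales involved. The effective diffusivity, porosity, and retardation values below are illustrative assumptions, not site data from the report:

```python
from math import erfc, sqrt

def matrix_profile(x_m, t_s, De=1e-11, porosity=0.01, R=1.0, c0=1.0):
    """Concentration at depth x (m) into the matrix after time t (s), for a
    constant concentration c0 held at the fracture wall:
        C(x, t) = c0 * erfc(x / (2*sqrt(Da*t))),
    with apparent diffusivity Da = De / (porosity * R) for a sorbing solute."""
    Da = De / (porosity * R)
    return c0 * erfc(x_m / (2.0 * sqrt(Da * t_s)))

# Penetration profile after 30 years for a non-sorbing tracer (R = 1).
t = 30 * 365.25 * 86400
depths = [matrix_profile(x, t) for x in (0.0, 0.01, 0.05, 0.2)]
```

With these assumed values the penetration depth is of order a metre over decades, illustrating why the limited extent of connected matrix porosity matters.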
Henry, Kevin; Wood, Nathan J.; Frazier, Tim G.
2017-01-01
Tsunami evacuation planning in coastal communities is typically focused on local events where at-risk individuals must move on foot in a matter of minutes to safety. Less attention has been placed on distant tsunamis, where evacuations unfold over several hours, are often dominated by vehicle use and are managed by public safety officials. Traditional traffic simulation models focus on estimating clearance times but often overlook the influence of varying population demand, alternative modes, background traffic, shadow evacuation, and traffic management alternatives. These factors are especially important for island communities with limited egress options to safety. We use the coastal community of Balboa Island, California (USA), as a case study to explore the range of potential clearance times prior to wave arrival for a distant tsunami scenario. We use a first-in–first-out queuing simulation environment to estimate variations in clearance times, given varying assumptions of the evacuating population (demand) and the road network over which they evacuate (supply). Results suggest clearance times are less than wave arrival times for a distant tsunami, except when we assume maximum vehicle usage for residents, employees, and tourists for a weekend scenario. A two-lane bridge to the mainland was the primary traffic bottleneck, thereby minimizing the effect of departure times, shadow evacuations, background traffic, boat-based evacuations, and traffic light timing on overall community clearance time. Reducing vehicular demand generally reduced clearance time, whereas improvements to road capacity had mixed results. Finally, failure to recognize non-residential employee and tourist populations in the vehicle demand substantially underestimated clearance time.
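The first-in-first-out queuing idea behind the clearance-time estimates can be sketched in a few lines. The bridge capacity and demand scenarios below are illustrative assumptions, not the Balboa Island figures:

```python
import numpy as np

def clearance_time(departure_minutes, capacity_per_min):
    """First-in-first-out bottleneck: the bridge passes `capacity_per_min`
    vehicles per minute, and vehicles queue in order of departure."""
    service = 1.0 / capacity_per_min      # minutes the bridge is occupied per vehicle
    free_at = 0.0                         # time at which the bridge is next free
    for d in np.sort(departure_minutes):
        start = max(d, free_at)           # wait in the queue if the bridge is busy
        free_at = start + service
    return free_at                        # when the last vehicle has crossed

rng = np.random.default_rng(1)
residents_only = rng.uniform(0, 60, 3000)   # vehicles departing over one hour
weekend_peak   = rng.uniform(0, 60, 9000)   # residents + employees + tourists

low  = clearance_time(residents_only, 30)   # 30 vehicles/min over the bridge
high = clearance_time(weekend_peak, 30)
```

Because demand exceeds bridge capacity from the outset, clearance time is governed by total demand divided by capacity, which is why departure timing and shadow evacuation matter little once the bottleneck saturates.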
Chui, Tina Tsz-Ting; Lee, Wen-Chung
2014-01-01
Many diseases result from interactions between genes and the environment. An efficient method has been proposed for case-control studies to estimate the genetic and environmental main effects and their interactions, exploiting the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. The approach is based on a conditional logistic regression of cases matched to counterfactual controls. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also yields smaller variances and shorter confidence intervals than a previous case-base method that imposes neither assumption. PMID:25137392
International Nuclear Information System (INIS)
El-Doma, M.
2001-02-01
The stability of the endemic equilibrium of an SIS age-structured epidemic model of a vertically as well as horizontally transmitted disease is investigated when the force of infection is of the proportionate mixing type. We also investigate uniform weak persistence of the disease. (author)
Energy Technology Data Exchange (ETDEWEB)
Arthur S. Rood; Swen O. Magnuson
2009-07-01
This document responds to a request by Ming Zhu (DOE-EM) for a preliminary review of existing models and data used in completed or soon-to-be-completed Performance Assessment and Composite Analysis (PA/CA) documents, identifying the codes, methodologies, main assumptions, and key data sets used.
Flache, A; Hegselmann, R
2001-01-01
Three decades of CA-modelling in the social sciences have shown that the cellular automata framework is a useful tool to explore the relationship between micro assumptions and macro outcomes in social dynamics. However, virtually all CA-applications in the social sciences rely on a potentially
Energy Technology Data Exchange (ETDEWEB)
Oladosu, Gbadebo A [ORNL; Kline, Keith L [ORNL
2010-01-01
The primary objective of current U.S. biofuel law, the Energy Independence and Security Act of 2007 (EISA), is to reduce dependence on imported oil, but the law also requires biofuels to meet carbon emission reduction thresholds relative to petroleum fuels. EISA created a renewable fuel standard with annual targets for U.S. biofuel use that climb gradually from 9 billion gallons per year in 2008 to 36 billion gallons (about 136 billion liters) per year by 2022. The most controversial aspects of the biofuel policy have centered on the global social and environmental implications of its potential land use effects. In particular, there is an ongoing debate about whether indirect land use change (ILUC) makes biofuels a net source, rather than a sink, of carbon emissions. However, estimates of ILUC induced by biofuel production and use can only be inferred through modeling. This paper evaluates how model structure, underlying assumptions, and the representation of policy instruments influence the results of U.S. biofuel policy simulations. The analysis shows that differences in these factors can lead to divergent model estimates of land use and economic effects. Estimates of the net conversion of forests and grasslands induced by U.S. biofuel policy range from 0.09 ha/1000 gallons, described in this paper, to 0.73 ha/1000 gallons from early studies in the ILUC debate. We note that several important factors governing ILUC remain to be examined. Challenges that must be addressed to improve global land use change modeling are highlighted.
Sant, Edda; Hanley, Chris
2018-01-01
Teacher education in England now requires that student teachers follow practices that do not undermine "fundamental British values" where these practices are assessed against a set of ethics and behaviour standards. This paper examines the political assumptions underlying pedagogical interpretations about the education of national…
Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
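The gap between the two 2-D idealizations can be seen directly in their standard isotropic constitutive matrices, which is one way to see why plane stress and plane strain bracket the 3-D result. The material values below are illustrative, not those of the skin-stiffener specimens:

```python
import numpy as np

def d_plane_stress(E, nu):
    """Isotropic plane-stress constitutive matrix (engineering shear strain)."""
    return (E / (1 - nu**2)) * np.array([[1,  nu, 0],
                                         [nu, 1,  0],
                                         [0,  0, (1 - nu) / 2]])

def d_plane_strain(E, nu):
    """Isotropic plane-strain constitutive matrix (engineering shear strain)."""
    c = E / ((1 + nu) * (1 - 2 * nu))
    return c * np.array([[1 - nu, nu,     0],
                         [nu,     1 - nu, 0],
                         [0,      0,     (1 - 2 * nu) / 2]])

E, nu = 70e3, 0.3          # MPa and Poisson ratio, purely illustrative
Ds = d_plane_stress(E, nu)
Dn = d_plane_strain(E, nu)
```

The plane-strain matrix is stiffer in the normal terms (out-of-plane strain is suppressed) while the shear term E/(2(1+nu)) is identical in both, which is consistent with the two assumptions acting as bounds on in-plane response.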
Archibald, Thomas; Sharrock, Guy; Buckley, Jane; Cook, Natalie
2016-12-01
Unexamined and unjustified assumptions are the Achilles' heel of development programs. In this paper, we describe an evaluation capacity building (ECB) approach designed to help community development practitioners work more effectively with assumptions through the intentional infusion of evaluative thinking (ET) into the program planning, monitoring, and evaluation process. We focus specifically on one component of our ET promotion approach involving the creation and analysis of theory of change (ToC) models. We describe our recent efforts to pilot this ET ECB approach with Catholic Relief Services (CRS) in Ethiopia and Zambia. The use of ToC models, plus the addition of ET, is a way to encourage individual and organizational learning and adaptive management that supports more reflective and responsive programming. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
2004-01-01
The overall aim of BIOCLIM is to assess the possible long term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The coarse spatial scale of the Earth-system Models of Intermediate Complexity (EMICs) used in BIOCLIM compared with the BIOCLIM study regions and the needs of performance assessment creates a need for down-scaling. Most of the developmental work on down-scaling methodologies undertaken by the international research community has focused on down-scaling from the general circulation model (GCM) scale (with a typical spatial resolution of 400 km by 400 km over Europe in the current generation of models) using dynamical down-scaling (i.e., regional climate models (RCMs), which typically have a spatial resolution of 50 km by 50 km for models whose domain covers the European region) or statistical methods (which can provide information at the point or station scale) in order to construct scenarios of anthropogenic climate change up to 2100. Dynamical down-scaling (with the MAR RCM) is used in BIOCLIM WP2 to down-scale from the GCM (i.e., IPSL_CM4_D) scale. In the original BIOCLIM description of work, it was proposed that UEA would apply statistical down-scaling to IPSL_CM4_D output in WP2 as part of the hierarchical strategy. Statistical down-scaling requires the identification of statistical relationships between the observed large-scale and regional/local climate, which are then applied to large-scale GCM output, on the assumption that these relationships remain valid in the future (the assumption of stationarity). Thus it was proposed that UEA would investigate the extent to which it is possible to apply relationships between the present-day large-scale and regional/local climate to the relatively extreme conditions of the BIOCLIM WP2 snapshot simulations. Potential statistical down-scaling methodologies were identified from previous work performed at UEA. Appropriate station data from the case
The reality behind the assumptions: Modelling and simulation support for the SAAF
CSIR Research Space (South Africa)
Naidoo, K
2015-10-01
Full Text Available Presentation: Modelling and simulation support for the SAAF (Kavendra Naidoo). Topics: military aerospace trends and strategy; national security includes other dimensions: social, economic development, environmental, energy security, etc...
Endogenous Fishing Mortality in Life History Models: Relaxing Some Implicit Assumptions
Smith, Martin D.
2007-01-01
Life history models can include a wide range of biological and ecological features that affect exploited fish populations. However, they typically treat fishing mortality as an exogenous parameter. Implicitly, this approach assumes that the supply of fishing effort is perfectly inelastic. That is, the supply curve of effort is vertical. Fishery modelers often run simulations for different values of fishing mortality, but this exercise also assumes vertical supply and simply explores a series o...
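Relaxing the vertical-supply assumption amounts to letting effort respond to profit rather than holding fishing mortality fixed. A minimal Gordon-Schaefer-style sketch of such endogenous effort, with all parameter values assumed for illustration, is:

```python
# Gordon-Schaefer open-access dynamics: the stock N grows logistically and is
# harvested at rate q*E*N; effort E enters or exits in proportion to profit,
# making fishing mortality endogenous rather than a fixed parameter.
r, K, q = 0.5, 1.0, 1.0       # intrinsic growth, carrying capacity, catchability
p, c, eta = 1.0, 0.4, 0.1     # price, unit cost of effort, entry/exit speed
dt, steps = 0.1, 5000

N, E = 1.0, 0.05              # start at carrying capacity with little effort
for _ in range(steps):
    harvest = q * E * N
    profit = p * harvest - c * E
    N += dt * (r * N * (1 - N / K) - harvest)
    E += dt * eta * profit

# The system spirals into the bionomic equilibrium where profit is dissipated:
# N* = c/(p*q) = 0.4 and E* = (r/q)*(1 - N*/K) = 0.3.
```

Fixing E (a vertical supply curve) would remove the feedback from profit to effort that drives the system toward this equilibrium.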
Nucleon deep-inelastic structure functions in a quark model with factorizability assumptions
International Nuclear Information System (INIS)
Linkevich, A.D.; Skachkov, N.B.
1979-01-01
A formula for the structure functions of deep-inelastic electron scattering on the nucleon is derived using a dynamical model of factorizing quark amplitudes. It is found that, as the squared momentum transfer Q^2 increases, the structure functions decrease at large values of the kinematic variable x and increase at small values of x. Comparison with experimental data shows good agreement between the model and experiment.
Sensitivity of tsunami evacuation modeling to direction and land cover assumptions
Schmidtlein, Mathew C.; Wood, Nathan J.
2015-01-01
Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, there has been insufficient attention paid to understanding model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations and use the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of modeling to the direction of movement by comparing standard safety-to-hazard evacuation times to hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times but the strong relationship to the hazard-to-safety evacuation times, slightly conservative bias, and shorter processing times of the safety-to-hazard approach make it the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations with evacuation times lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.
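The Monte Carlo treatment of speed conservation values can be sketched as follows. The path segments, SCV ranges, and walking speed are illustrative assumptions, not the Seward model inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
base_speed = 1.1                # m/s, a slow-walk baseline

# An evacuation path as (land-cover type, segment length in metres).
path = [("road", 400), ("grass", 150), ("wetland", 120), ("beach", 80)]

# Assumed SCV ranges: the fraction of base speed preserved on each cover type.
scv_ranges = {"road": (0.9, 1.0), "grass": (0.6, 0.9),
              "wetland": (0.2, 0.5), "beach": (0.5, 0.8)}

def travel_time(scvs):
    """Seconds to traverse the path given one set of speed conservation values."""
    return sum(length / (base_speed * scvs[cover]) for cover, length in path)

# One thousand SCV sets, as in the study's sensitivity analysis.
times = np.array([
    travel_time({c: rng.uniform(lo, hi) for c, (lo, hi) in scv_ranges.items()})
    for _ in range(1000)
])
```

The spread of `times` relative to the expected wave arrival time is what flags locations whose evacuation estimates are sensitive to the land-cover assumptions, particularly where wetland or beach segments dominate.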
Model of the electric energy market in Poland. Assumptions, structure and operation principles
International Nuclear Information System (INIS)
Kulagowski, W.
1994-01-01
The present state of work on a model of the electric energy market in Poland is presented, with special consideration of the bulk energy market. The proposed model, based on gradual, evolutionary changes, is flexible enough that particular solutions can be verified or corrected while the general structure and fundamentals are retained. The changes in the electric energy market are considered an integral part of the ongoing restructuring of the Polish electric energy sector; the rate of those changes and the mode of their introduction determine how quickly the new solutions can be introduced. (author). 14 refs, 4 figs
Multiverse Assumptions and Philosophy
Directory of Open Access Journals (Sweden)
James R. Johnson
2018-02-01
Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with the fundamental nature of reality, ideas that cannot be proven right or wrong), touching on topics such as: infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine-tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.
Kleijnen, J.P.C.
2006-01-01
Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these
Accurate reduction of a model of circadian rhythms by delayed quasi steady state assumptions
Czech Academy of Sciences Publication Activity Database
Vejchodský, Tomáš
2014-01-01
Roč. 139, č. 4 (2014), s. 577-585 ISSN 0862-7959 Grant - others:European Commission(XE) StochDetBioModel(328008) Program:FP7 Institutional support: RVO:67985840 Keywords : biochemical networks * gene regulatory networks * oscillating systems * periodic solution Subject RIV: BA - General Mathematics http://hdl.handle.net/10338.dmlcz/144135
Sensitivity to assumptions in models of generalist predation on a cyclic prey.
Matthiopoulos, Jason; Graham, Kate; Smout, Sophie; Asseburg, Christian; Redpath, Stephen; Thirgood, Simon; Hudson, Peter; Harwood, John
2007-10-01
Ecological theory predicts that generalist predators should damp or suppress long-term periodic fluctuations (cycles) in their prey populations and depress their average densities. However, the magnitude of these impacts is likely to vary depending on the availability of alternative prey species and the nature of ecological mechanisms driving the prey cycles. These multispecies effects can be modeled explicitly if parameterized functions relating prey consumption to prey abundance, and realistic population dynamical models for the prey, are available. These requirements are met by the interaction between the Hen Harrier (Circus cyaneus) and three of its prey species in the United Kingdom, the Meadow Pipit (Anthus pratensis), the field vole (Microtus agrestis), and the Red Grouse (Lagopus lagopus scoticus). We used this system to investigate how the availability of alternative prey and the way in which prey dynamics are modeled might affect the behavior of simple trophic networks. We generated cycles in one of the prey species (Red Grouse) in three different ways: through (1) the interaction between grouse density and macroparasites, (2) the interaction between grouse density and male grouse aggressiveness, and (3) a generic, delayed density-dependent mechanism. Our results confirm that generalist predation can damp or suppress grouse cycles, but only when the densities of alternative prey are low. They also demonstrate that diametrically opposite indirect effects between pairs of prey species can occur together in simple systems. In this case, pipits and grouse are apparent competitors, whereas voles and grouse are apparent facilitators. Finally, we found that the quantitative impacts of the predator on prey density differed among the three models of prey dynamics, and these differences were robust to uncertainty in parameter estimation and environmental stochasticity.
Oceanographic and behavioural assumptions in models of the fate of coral and coral reef fish larvae.
Wolanski, Eric; Kingsford, Michael J
2014-09-06
A predictive model of the fate of coral reef fish larvae in a reef system is proposed that combines the oceanographic processes of advection and turbulent diffusion with the biological process of horizontal swimming controlled by olfactory and auditory cues within the timescales of larval development. In the model, auditory cues resulted in swimming towards the reefs when within hearing distance of the reef, whereas olfactory cues resulted in the larvae swimming towards the natal reef in open waters by swimming against the concentration gradients in the smell plume emanating from the natal reef. The model suggested that the self-seeding rate may be quite large, at least 20% for the larvae of rapidly developing reef fish species, which contrasted with a self-seeding rate less than 2% for non-swimming coral larvae. The predicted self-recruitment rate of reefs was sensitive to a number of parameters, such as the time at which the fish larvae reach post-flexion, the pelagic larval duration of the larvae, the horizontal turbulent diffusion coefficient in reefal waters and the horizontal swimming behaviour of the fish larvae in response to auditory and olfactory cues, for which better field data are needed. Thus, the model suggested that high self-seeding rates for reef fish are possible, even in areas where the 'sticky water' effect is minimal and in the absence of long-term trapping in oceanic fronts and/or large-scale oceanic eddies or filaments that are often argued to facilitate the return of the larvae after long periods of drifting at sea. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
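The interplay of drift, turbulent diffusion, and cue-triggered swimming described above can be caricatured in one dimension. Every parameter below (current, diffusivity, swimming speed, hearing radius, competency time) is an illustrative assumption, not a value from the model:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D sketch: larvae drift (advection + turbulent diffusion) away from the
# natal reef at x = 0 and, once competent and within hearing distance of the
# reef, swim back toward it.
n, steps, dt = 2000, 500, 600.0           # larvae, time steps, seconds per step
u, K = 0.005, 0.5                         # mean current (m/s), diffusivity (m^2/s)
swim, hearing = 0.05, 1000.0              # swimming speed (m/s), hearing radius (m)

def simulate(swimming, competent_after=250):
    x = np.zeros(n)                       # all larvae start at the natal reef
    settled = np.zeros(n, dtype=bool)
    for i in range(steps):
        step = u * dt + rng.normal(0, np.sqrt(2 * K * dt), n)
        if swimming and i >= competent_after:           # post-flexion larvae swim
            near = np.abs(x) < hearing
            step[near] -= swim * dt * np.sign(x[near])  # head for the reef
        x = np.where(settled, x, x + step)
        if i >= competent_after:
            settled |= np.abs(x) < 50.0   # within 50 m of the reef: self-seeded
    return settled.mean()

passive = simulate(False)                 # non-swimming (coral-like) larvae
swimmer = simulate(True)                  # cue-following fish larvae
```

Even this caricature reproduces the qualitative contrast in the abstract: cue-directed swimming raises the self-seeding fraction by an order of magnitude over passive drift.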
In modelling effects of global warming, invalid assumptions lead to unrealistic projections.
Lefevre, Sjannie; McKenzie, David J; Nilsson, Göran E
2018-02-01
In their recent Opinion, Pauly and Cheung () provide new projections of future maximum fish weight (W∞). Based on criticism by Lefevre et al. (2017) they changed the scaling exponent for anabolism, d_G. Here we find that changing both d_G and the scaling exponent for catabolism, b, leads to the projection that fish may even become 98% smaller with a 1°C increase in temperature. This unrealistic outcome indicates that the current W∞ is unlikely to be explained by the Gill-Oxygen Limitation Theory (GOLT) and, therefore, GOLT cannot be used as a mechanistic basis for model projections about fish size in a warmer world. © 2017 John Wiley & Sons Ltd.
Linking assumptions in amblyopia
LEVI, DENNIS M.
2017-01-01
Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns of the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as a posterior analysis of the positive sign of the tested relationship.
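Of the skewing mechanisms listed, hidden truncation has a particularly compact constructive form (Azzalini's skew-normal construction). The shape parameter and sample size below are illustrative assumptions, unrelated to the paper's estimated models:

```python
import numpy as np

rng = np.random.default_rng(5)

def skew_normal(delta, size):
    """Hidden-truncation construction: Z = delta*|U| + sqrt(1 - delta^2)*V,
    with U, V independent standard normals, yields a skew-normal variate.
    delta in (-1, 1) controls the sign and degree of skewness."""
    u = rng.normal(size=size)
    v = rng.normal(size=size)
    return delta * np.abs(u) + np.sqrt(1 - delta**2) * v

z = skew_normal(0.9, 100_000)
skewness = np.mean((z - z.mean())**3) / z.std()**3   # sample skewness
```

Embedding such a mechanism in the innovation distribution of a GARCH-in-Mean model is what allows conditional skewness to act as a channel between risk and return.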
International Nuclear Information System (INIS)
FORSYTHE, JAMES C.; WENNER, CAREN A.
1999-01-01
The history of high consequence accidents is rich with events wherein the actions, or inaction, of humans were critical to the sequence of events preceding the accident. Moreover, it has been reported that human error may contribute to 80% of accidents, if not more (Dougherty and Fragola, 1988). Within the safety community, this reality is widely recognized and there is a substantially greater awareness of the human contribution to system safety today than has ever existed in the past. Despite these facts, and some measurable reduction in accident rates, when accidents do occur, there is a common lament: no matter how hard we try, we continue to have accidents. Accompanying this lament, there is often bewilderment expressed in statements such as, ''There's no explanation for why he/she did what they did''. It is believed that these statements are a symptom of inadequacies in how we think about humans and their role within technological systems. In particular, while there has never been a greater awareness of human factors, conceptual models of human involvement in engineered systems are often incomplete and, in some cases, inaccurate.
Bootstrap prediction and Bayesian prediction under misspecified models
Fushiki, Tadayoshi
2005-01-01
We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
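The contrast between a plug-in prediction and its bagged version can be sketched for a deliberately misspecified model. The exponential data, Gaussian working model, and tail threshold below are illustrative assumptions, not the paper's setting:

```python
import math
import numpy as np

rng = np.random.default_rng(4)

# Training data from a skewed (exponential) distribution; the assumed model is
# a Gaussian, so the plug-in predictive tail probability is misspecified.
x = rng.exponential(scale=2.0, size=50)

def plug_in_tail(sample, t=6.0):
    """P(X_new > t) under a Gaussian fitted to the sample by maximum likelihood."""
    mu, sigma = sample.mean(), sample.std(ddof=1)
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))

# Bagging (Breiman): average the plug-in prediction over bootstrap resamples.
B = 500
bagged = float(np.mean([plug_in_tail(rng.choice(x, size=x.size, replace=True))
                        for _ in range(B)]))
plug_in = plug_in_tail(x)
```

Averaging over resamples smooths the plug-in prediction's dependence on the estimated parameters, which is the sense in which bootstrap prediction approximates a Bayesian predictive under the assumed model.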
Vachon, Mary L S
This article reflects on the development and impact of the International Workgroup on Death, Dying and Bereavement's (IWG) pivotal document on The Assumptions and Principles Underlying Standards for Terminal Care. It was at the Ars Moriendi meetings in Columbia, Maryland that the author first met Bob and Bunny Kastenbaum. The meeting led to the development of IWG and the first task of this group was the development of the "Standards" document. The initial document reflected the pioneering work already being done by Kastenbaum and others on the committee and then was formative in the development of other documents such as the National Hospice Association Standards. Participants in the original workgroup were asked for their reflections on the significance of the document and the literature was surveyed to assess the impact of the "Standards" document on the field.
Zigarmi, Drea; Roberts, Taylor Peyton
2017-01-01
Purpose: This study aims to test the following three assertions underlying the Situational Leadership® II (SLII) Model: all four leadership styles are received by followers; all four leadership styles are needed by followers; and if there is a fit between the leadership style a follower receives and needs, that follower will demonstrate favorable…
DEFF Research Database (Denmark)
Wiesen, S.; Fundamenski, W.; Wischmeier, M.
2011-01-01
A revised formulation of the perpendicular diffusive transport model in 2D multi-fluid edge codes is proposed. Based on theoretical predictions and experimental observations, a dependence on collisionality is introduced into the transport model of EDGE2D–EIRENE. The impact on time-dependent JET gas fuelled ramp-up scenario modelling of the full transient from attached divertor into the high-recycling regime, following a target flux roll over into divertor detachment, ultimately ending in a density limit is presented. A strong dependence on divertor geometry is observed which can mask features of the new transport model: a smoothly decaying target recycling flux roll over, an asymmetric drop of temperature and pressure along the field lines as well as macroscopic power dependent plasma oscillations near the density limit which had been previously observed also experimentally. The latter effect...
Partitioning uncertainty in streamflow projections under nonstationary model conditions
Chawla, Ila; Mujumdar, P. P.
2018-02-01
Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most previous studies have considered climate models and scenarios as the major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions of (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes to the overall uncertainty in streamflow projections, using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters held unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to reduce in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine stationarity assumption of models before considering them
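The ANOVA-style segregation of uncertainty sources can be illustrated with a toy sum-of-squares decomposition over a balanced factorial of projections. All factor effects and magnitudes below are invented for illustration and are not the UGB results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical streamflow projections (mm/yr) indexed as
# [GCM, emission scenario, land-use scenario]; all numbers invented.
gcm_eff = np.array([40.0, -10.0, -30.0])   # 3 GCMs
scen_eff = np.array([15.0, -15.0])         # 2 emission scenarios
lu_eff = np.array([5.0, 0.0, -5.0])        # 3 land-use scenarios
flow = (500.0
        + gcm_eff[:, None, None]
        + scen_eff[None, :, None]
        + lu_eff[None, None, :]
        + rng.normal(0.0, 2.0, size=(3, 2, 3)))  # internal variability

grand = flow.mean()
ss_total = ((flow - grand) ** 2).sum()

def main_effect_ss(axis_keep):
    # Sum of squares of one factor's marginal means, scaled by the
    # number of cells averaged over (balanced-design ANOVA term).
    axes = tuple(i for i in range(3) if i != axis_keep)
    cells = flow.size // flow.shape[axis_keep]
    return cells * ((flow.mean(axis=axes) - grand) ** 2).sum()

for name, ax in [("GCM", 0), ("scenario", 1), ("land use", 2)]:
    print(name, round(main_effect_ss(ax) / ss_total, 3))
```

In this toy setup the GCM factor dominates the variance fractions, mirroring the qualitative finding reported in the abstract.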
Post, Ellen S; Grambsch, Anne; Weaver, Chris; Morefield, Philip; Huang, Jin; Leung, Lai-Yung; Nolte, Christopher G; Adams, Peter; Liang, Xin-Zhong; Zhu, Jin-Hong; Mahoney, Hardee
2012-11-01
Future climate change may cause air quality degradation via climate-induced changes in meteorology, atmospheric chemistry, and emissions into the air. Few studies have explicitly modeled the potential relationships between climate change, air quality, and human health, and fewer still have investigated the sensitivity of estimates to the underlying modeling choices. Our goal was to assess the sensitivity of estimated ozone-related human health impacts of climate change to key modeling choices. Our analysis included seven modeling systems in which a climate change model is linked to an air quality model, five population projections, and multiple concentration-response functions. Using the U.S. Environmental Protection Agency's (EPA's) Environmental Benefits Mapping and Analysis Program (BenMAP), we estimated future ozone (O3)-related health effects in the United States attributable to simulated climate change between the years 2000 and approximately 2050, given each combination of modeling choices. Health effects and concentration-response functions were chosen to match those used in the U.S. EPA's 2008 Regulatory Impact Analysis of the National Ambient Air Quality Standards for O3. Different combinations of methodological choices produced a range of estimates of national O3-related mortality from roughly 600 deaths avoided as a result of climate change to 2,500 deaths attributable to climate change (although the large majority produced increases in mortality). The choice of the climate change and the air quality model reflected the greatest source of uncertainty, with the other modeling choices having lesser but still substantial effects. Our results highlight the need to use an ensemble approach, instead of relying on any one set of modeling choices, to assess the potential risks associated with O3-related human health effects resulting from climate change.
Modeling of porous concrete elements under load
Demchyna, B. H.; Famuliak, Yu. Ye.; Demchyna, Kh. B.
2017-12-01
It is known that cell concretes are almost immediately destroyed under load, having reached certain critical stresses. Such kind of destruction is called a "catastrophic failure". The process of crack formation is one of the main factors influencing the process of concrete destruction. The modern theory of crack formation is mainly based on the Griffith theory of destruction. However, the mentioned theory does not completely correspond to the structure of cell concrete, with its cell structure, because the theory is intended for a solid body. The article presents one of the possible variants of modelling the structure of cell concrete and gives some assumptions concerning the process of crack formation in such a hollow, non-solid environment.
Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.
2014-01-01
Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.
Chemical model reduction under uncertainty
Najm, Habib
2016-01-05
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Multiverse Assumptions and Philosophy
James R. Johnson
2018-01-01
Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with the fundamental nature of reality, ideas that cannot be proven right or wrong): topics such as infinity, duplicate yous, hypothetical fields, mo...
Modelling microstructural evolution under irradiation
International Nuclear Information System (INIS)
Tikare, V.
2015-01-01
Microstructural evolution of materials under irradiation is characterised by some unique features that are not typically present in other application environments. While much understanding has been achieved by experimental studies, the ability to model this microstructural evolution for complex materials states and environmental conditions not only enhances understanding, it also enables prediction of materials behaviour under conditions that are difficult to duplicate experimentally. Furthermore, reliable models enable designing materials for improved engineering performance for their respective applications. Thus, development and application of mesoscale microstructural model are important for advancing nuclear materials technologies. In this chapter, the application of the Potts model to nuclear materials will be reviewed and demonstrated, as an example of microstructural evolution processes. (author)
McKee, M.; Royer, D. L.
2017-12-01
The physiognomy (size and shape) of fossilized leaves has been used to reconstruct the mean annual temperature of ancient environments. Colder temperatures often select for larger and more abundant leaf teeth—serrated edges on leaf margins—as well as a greater degree of leaf dissection. However, to be able to accurately predict paleotemperature from the morphology of fossilized leaves, leaves must be able to react quickly and in a predictable manner to changes in temperature. We examined the extent to which temperature affects leaf morphology in four tree species: Carpinus caroliniana, Acer negundo, Ilex opaca, and Ostrya virginiana. Saplings of these species were grown in two growth cabinets under contrasting temperatures (17 and 25 °C). Compared to the cool treatment, in the warm treatment Carpinus caroliniana leaves had significantly fewer leaf teeth and a lower ratio of total number of leaf teeth to internal perimeter; and Acer negundo leaves had a significantly lower feret diameter ratio (a measure of leaf dissection). In addition, a two-way ANOVA tested the influence of temperature and species on leaf physiognomy. This analysis revealed that all plants, regardless of species, tended to develop more highly dissected leaves with more leaf teeth in the cool treatment. Because the cabinets maintained equivalent moisture, humidity, and CO2 concentration between the two treatments, these results demonstrate that these species could rapidly adapt to changes in temperature. However, not all of the species reacted identically to temperature changes. For example, Acer negundo, Carpinus caroliniana, and Ostrya virginiana all had a higher number of total teeth in the cool treatment compared to the warm treatment, but the opposite was true for Ilex opaca. Our work questions a fundamental assumption common to all models predicting paleotemperature from the physiognomy of fossilized leaves: a given climate will inevitably select for the same leaf physiognomy
Directory of Open Access Journals (Sweden)
Veronika Brandstetter
2015-10-01
Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality and with minimal strain on resources. A key driver in conducting these processes is the automation plant’s control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes and from the physical plant design. Currently, the validation of the control software often starts late in the engineering process, in many cases once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects is, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on the example of an actual plant project from the automation industry and present its technical implementation
Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire
2013-01-01
This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and, specifically, in evaluation analysis. Unlike the normal model, the GB model allows us to capture some real characteristics of data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…
Anderson, S A; Gavazzi, S M
1990-09-01
The debate over the usefulness of different family models continues. Recent attention has been paid to comparisons between the Olson Circumplex Model and the Beavers Systems Model. The present study seeks to contribute evidence that bears directly upon one of the most fundamental points of controversy surrounding the Olson model--the linear versus curvilinear nature of the cohesion and adaptability dimensions. A further contribution is an examination of the actual occurrence of the Circumplex Model's extreme types in a clinical population.
Sensitivity Analysis Without Assumptions.
Ding, Peng; VanderWeele, Tyler J
2016-05-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
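The bounding factor and the associated joint-strength threshold (often called the E-value) are closed-form expressions, so the abstract's claim is easy to reproduce numerically. The input values below are hypothetical:

```python
import math

def bounding_factor(rr_eu, rr_ud):
    # Maximum factor by which an unmeasured confounder with
    # exposure-confounder risk ratio rr_eu and confounder-outcome
    # risk ratio rr_ud can inflate an observed risk ratio.
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr_obs):
    # Minimum joint strength (rr_eu = rr_ud) of both confounding
    # associations needed to fully explain away an observed rr_obs > 1.
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

# A confounder with both risk ratios equal to 2 can account for at most
# a factor of 4/3, so it cannot explain away an observed RR of 1.5.
print(bounding_factor(2.0, 2.0))   # 1.333...
print(round(e_value(1.5), 2))      # 2.37
```

Note the "high threshold" property mentioned in the abstract: both relative risks jointly must reach the E-value, not merely their product.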
Greenland, Sander; Mansournia, Mohammad Ali
2015-10-01
We describe how ordinary interpretations of causal models and causal graphs fail to capture important distinctions among ignorable allocation mechanisms for subject selection or allocation. We illustrate these limitations in the case of random confounding and designs that prevent such confounding. In many experimental designs individual treatment allocations are dependent, and explicit population models are needed to show this dependency. In particular, certain designs impose unfaithful covariate-treatment distributions to prevent random confounding, yet ordinary causal graphs cannot discriminate between these unconfounded designs and confounded studies. Causal models for populations are better suited for displaying these phenomena than are individual-level models, because they allow representation of allocation dependencies as well as outcome dependencies across individuals. Nonetheless, even with this extension, ordinary graphical models still fail to capture distinctions between hypothetical superpopulations (sampling distributions) and observed populations (actual distributions), although potential-outcome models can be adapted to show these distinctions and their consequences.
Tunis, Sandra L
2011-11-01
Canadian patients, healthcare providers and payers share interest in assessing the value of self-monitoring of blood glucose (SMBG) for individuals with type 2 diabetes but not on insulin. Using the UKPDS (UK Prospective Diabetes Study) model, the Canadian Optimal Prescribing and Utilization Service (COMPUS) conducted an SMBG cost-effectiveness analysis. Based on the results, COMPUS does not recommend routine strip use for most adults with type 2 diabetes who are not on insulin. Cost-effectiveness studies require many assumptions regarding cohort, clinical effect, complication costs, etc. The COMPUS evaluation included several conservative assumptions that negatively impacted SMBG cost effectiveness. Current objectives were to (i) review key, impactful COMPUS assumptions; (ii) illustrate how alternative inputs can lead to more favourable results for SMBG cost effectiveness; and (iii) provide recommendations for assessing its long-term value. A summary of COMPUS methods and results was followed by a review of assumptions (for trial-based glycosylated haemoglobin [HbA(1c)] effect, patient characteristics, costs, simulation pathway) and their potential impact. The UKPDS model was used for a 40-year cost-effectiveness analysis of SMBG (1.29 strips per day) versus no SMBG in the Canadian payer setting. COMPUS assumptions for patient characteristics (e.g. HbA(1c) 8.4%), SMBG HbA(1c) advantage (-0.25%) and costs were retained. As with the COMPUS analysis, UKPDS HbA(1c) decay curves were incorporated into SMBG and no-SMBG pathways. An important difference was that SMBG HbA(1c) benefits in the current study could extend beyond the initial simulation period. Sensitivity analyses examined SMBG HbA(1c) advantage, adherence, complication history and cost inputs. Outcomes (discounted at 5%) included QALYs, complication rates, total costs (year 2008 values) and incremental cost-effectiveness ratios (ICERs). The base-case ICER was $Can63 664 per QALY gained; approximately 56% of
Directory of Open Access Journals (Sweden)
Nicolas Haverkamp
2017-10-01
Full Text Available We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with no violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement
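The Greenhouse-Geisser correction mentioned above rescales the rANOVA degrees of freedom by Box's epsilon, which can be estimated directly from the covariance matrix of the measurement occasions. The sketch below assumes that matrix is given; the example matrices are illustrative:

```python
import numpy as np

def gg_epsilon(cov):
    # Box/Greenhouse-Geisser epsilon estimated from the k x k covariance
    # matrix of the repeated measurement occasions. Sphericity gives
    # epsilon = 1; the lower bound is 1/(k-1).
    k = cov.shape[0]
    J = np.eye(k) - np.ones((k, k)) / k    # centering matrix
    s = J @ cov @ J                         # double-centered covariance
    return np.trace(s) ** 2 / ((k - 1) * np.trace(s @ s))

# Compound symmetry (equal variances, equal correlations) satisfies
# sphericity, so epsilon is exactly 1.
cs = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)
print(round(gg_epsilon(cs), 3))            # 1.0

# Mixing uncorrelated and highly correlated occasions, as in the
# simulated populations above, violates sphericity: epsilon drops
# below 1 and the corrected degrees of freedom shrink.
bad = np.eye(4)
bad[2:, 2:] = np.array([[1.0, 0.9], [0.9, 1.0]])
print(gg_epsilon(bad) < 1.0)               # True
```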
Haldane model under nonuniform strain
Ho, Yen-Hung; Castro, Eduardo V.; Cazalilla, Miguel A.
2017-10-01
We study the Haldane model under strain using a tight-binding approach, and compare the obtained results with the continuum-limit approximation. As in graphene, nonuniform strain leads to a time-reversal preserving pseudomagnetic field that induces (pseudo-)Landau levels. Unlike a real magnetic field, strain lifts the degeneracy of the zeroth pseudo-Landau levels at different valleys. Moreover, for the zigzag edge under uniaxial strain, strain removes the degeneracy within the pseudo-Landau levels by inducing a tilt in their energy dispersion. The latter arises from next-to-leading order corrections to the continuum-limit Hamiltonian, which are absent for a real magnetic field. We show that, for the lowest pseudo-Landau levels in the Haldane model, the dominant contribution to the tilt is different from graphene. In addition, although strain does not strongly modify the dispersion of the edge states, their interplay with the pseudo-Landau levels is different for the armchair and zigzag ribbons. Finally, we study the effect of strain in the band structure of the Haldane model at the critical point of the topological transition, thus shedding light on the interplay between nontrivial topology and strain in quantum anomalous Hall systems.
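As a point of reference for the strain analysis above, the unstrained Haldane Bloch Hamiltonian can be written down in a few lines. The parameter values are illustrative, and the standard parametrization (nearest-neighbour hopping t1, complex next-nearest-neighbour hopping t2·e^{iφ}, sublattice mass m) is assumed:

```python
import numpy as np

def haldane_bloch(kx, ky, t1=1.0, t2=0.1, phi=np.pi / 2, m=0.2):
    # 2x2 Bloch Hamiltonian of the Haldane model on the honeycomb lattice.
    a = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2]])             # nearest-neighbour vectors
    b = np.array([a[1] - a[2], a[2] - a[0], a[0] - a[1]])  # next-nearest-neighbour
    k = np.array([kx, ky])
    f = np.sum(np.exp(1j * (a @ k)))                    # NN structure factor
    h0 = 2 * t2 * np.cos(phi) * np.sum(np.cos(b @ k))   # identity part
    hz = m - 2 * t2 * np.sin(phi) * np.sum(np.sin(b @ k))  # sublattice part
    return np.array([[h0 + hz, t1 * f],
                     [t1 * np.conj(f), h0 - hz]])

H = haldane_bloch(0.3, -0.2)
print(np.linalg.eigvalsh(H))   # the two band energies at this k-point
```

A strain term would enter through bond-dependent modifications of the hopping amplitudes, which is what generates the pseudomagnetic field discussed in the abstract.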
Hankin, R K; Britter, R E
1999-05-14
The Major Hazard Assessment Unit of the Health and Safety Executive (HSE) provides advice to local planning authorities on land use planning in the vicinity of major hazard sites. For sites with the potential for large scale releases of toxic heavy gases such as chlorine this advice is based on risk levels and is informed by use of the computerised risk assessment tool RISKAT [C. Nussey, M. Pantony, R. Smallwood, HSE's risk assessment tool RISKAT, Major Hazards: Onshore and Offshore, October, 1992]. At present RISKAT uses consequence models for heavy gas dispersion that assume flat terrain. This paper is the first part of a three part paper. Part 1 describes the mathematical basis of TWODEE, the Health and Safety Laboratory's shallow layer model for heavy gas dispersion. The shallow layer approach used by TWODEE is a compromise between the complexity of CFD models and the simpler integral models. Motivated by the low aspect ratio of typical heavy gas clouds, shallow layer models use depth-averaged variables to describe the flow behaviour. This approach is particularly well suited to assess the effect of complex terrain because the downslope buoyancy force is easily included. Entrainment may be incorporated into a shallow layer model by the use of empirical formulae. Part 2 of this paper presents the numerical scheme used to solve the TWODEE mathematical model, which is validated against theoretical results. Part 3 compares the results of the TWODEE model with the experimental results taken at Thorney Island [J. McQuaid, B. Roebuck, The dispersion of heavier-than-air gas from a fenced enclosure. Final report to the US Coast Guard on contract with the Health and Safety Executive, Technical Report RPG 1185, Safety Engineering Laboratory, Research and Laboratory Services Division, Broad Lane, Sheffield S3 7HQ, UK, 1985]. Crown Copyright 1999. Published by Elsevier Science B.V.
Tatiana Danescu; Ovidiu Spatacean; Paula Nistor; Andrea Cristina Danescu
2010-01-01
Designing and performing analytical procedures aimed to assess the rating of the Financial Investment Companies are essential activities both in the phase of planning a financial audit mission and in the phase of issuing conclusions regarding the suitability of the use of going concern, by the management and other persons responsible for governance, as the basis for preparation and disclosure of financial statements. The paper aims to examine the usefulness of recognized models used in the practice o...
On testing the missing at random assumption
DEFF Research Database (Denmark)
Jaeger, Manfred
2006-01-01
Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism, for which we use our recently proposed AI & M algorithm. We present experimental results on synthetic data that show that our approximate test statistic is a good indicator for whether data is mar relative to the given distributional assumptions.
Danieli, Coraline; Bossard, Nadine; Roche, Laurent; Belot, Aurelien; Uhry, Zoe; Charvat, Hadrien; Remontet, Laurent
2017-07-01
Net survival, the one that would be observed if the disease under study was the only cause of death, is an important, useful, and increasingly used indicator in public health, especially in population-based studies. Estimates of net survival and of the effects of prognostic factors can be obtained by excess hazard regression modeling. Whereas various diagnostic tools were developed for overall survival analysis, few methods are available to check the assumptions of excess hazard models. We propose here two formal tests to check the proportional hazard assumption and the validity of the functional form of the covariate effects in the context of flexible parametric excess hazard modeling. These tests were adapted from martingale residual-based tests for parametric modeling of overall survival to allow adding to the model a necessary element for net survival analysis: the population mortality hazard. We studied the size and the power of these tests through an extensive simulation study based on complex but realistic data. The new tests showed sizes close to the nominal values and satisfactory powers. The power of the proportionality test was similar to or greater than that of other tests already available in the field of net survival. We illustrate the use of these tests with real data from French cancer registries. © The Author 2017. Published by Oxford University Press. All rights reserved.
Cox, Emily R; Motheral, Brenda; Mager, Doug
2003-12-01
To verify the gastroprotective agent (GPA) rate assumption used in cost-effectiveness models for cyclo-oxygenase 2 inhibitors (COX-2s) and to re-estimate model outcomes using GPA rates from actual practice. Prescription and medical claims data obtained from January 1, 1999, through May 31, 2001, from a large preferred provider organization in the Midwest, were used to estimate GPA rates within 3 groups of patients aged at least 18 years who were new to nonselective nonsteroidal anti-inflammatory drugs (NSAIDs) and COX-2 therapy: all new NSAID users, new NSAID users with a diagnosis of rheumatoid arthritis (RA) or osteoarthritis (OA), and a matched cohort of new NSAID users. Of the more than 319,000 members with at least 1 day of eligibility, 1900 met the study inclusion criteria for new NSAID users, 289 had a diagnosis of OA or RA, and 1232 were included in the matched cohort. Gastroprotective agent estimates for nonselective NSAID and COX-2 users were consistent across all 3 samples (all new NSAID users, new NSAID users with a diagnosis of OA or RA, and the matched cohort), with COX-2 GPA rates of 22%, 21%, and 20%, and nonselective NSAID GPA rates of 15%, 15%, and 18%, respectively. Re-estimation of the cost-effectiveness model increased the cost per year of life saved for COX-2s from $18,614 to more than $100,000. Contrary to COX-2 cost-effectiveness model assumptions, the rate of GPA use is positive and marginally higher among COX-2 users than among nonselective NSAID users. These findings call into question the use of expert opinion in estimating practice pattern model inputs prior to a product's use in clinical practice. A re-evaluation of COX-2 cost-effectiveness models is warranted.
Testing Our Fundamental Assumptions
Kohler, Susanna
2016-06-01
Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explores the possibility of using transient astrophysical sources for tests! [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these
Assumptions of Multiple Regression: Correcting Two Misconceptions
Directory of Open Access Journals (Sweden)
Matt N. Williams
2013-09-01
Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.
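The attenuation effect described above is easy to reproduce by simulation. The sketch below (toy numbers, not from the article) shows that uncorrelated measurement error in the predictor shrinks the simple-regression slope toward zero by the classical reliability factor, here var(x)/(var(x)+var(u)) = 1/2:

```python
import random

random.seed(42)

def slope(xs, ys):
    # Ordinary least squares slope for simple regression.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

n, true_beta = 20000, 2.0
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [true_beta * x + random.gauss(0, 1) for x in x_true]

# Predictor observed with uncorrelated measurement error of variance 1.
x_obs = [x + random.gauss(0, 1) for x in x_true]

b_clean = slope(x_true, y)  # close to the true slope 2.0
b_noisy = slope(x_obs, y)   # attenuated toward 2.0 * 1/(1+1) = 1.0
```

With correlated measurement error the direction of the bias can flip, which is the second correction the article makes.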
Generic distortion model for metrology under optical microscopes
Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng
2018-04-01
For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortion can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. The model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (image coordinates and distortion rays for asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, for example in digital image correlation for deformation measurement under an optical microscope. The proposed technique is verified by both synthetic and real-data experiments.
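To make the RBF idea concrete, here is a minimal sketch (not the paper's implementation; the calibration samples and the Gaussian kernel width are made up) of interpolating a symmetric radial distortion from a few measured (radius, distortion) pairs:

```python
import math

def rbf_fit(radii, distortions, eps=1.0):
    # Fit Gaussian RBF weights by solving the interpolation system A w = d,
    # where A[i][j] = exp(-(eps*(r_i - r_j))**2).  A small dense solve via
    # Gaussian elimination is fine for a handful of calibration samples.
    n = len(radii)
    A = [[math.exp(-(eps * (radii[i] - radii[j])) ** 2) for j in range(n)]
         for i in range(n)]
    b = list(distortions)
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        s = sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = (b[r] - s) / A[r][r]
    return w

def rbf_eval(radii, w, r, eps=1.0):
    return sum(wi * math.exp(-(eps * (r - ri)) ** 2) for wi, ri in zip(w, radii))

# Toy calibration: symmetric radial distortion sampled at a few radii.
samples_r = [0.0, 0.5, 1.0, 1.5, 2.0]
samples_d = [0.0, 0.01, 0.05, 0.12, 0.22]  # measured displacement at each radius
w = rbf_fit(samples_r, samples_d)
d_mid = rbf_eval(samples_r, w, 0.75)       # distortion predicted between samples
```

The same machinery extends to the asymmetric case by interpolating over 2-D image coordinates instead of the scalar radius.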
Linear regression and the normality assumption.
Schmidt, Amand F; Finan, Chris
2017-12-16
Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
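The central point, that non-normal errors do not bias the OLS point estimate in large samples, can be checked with a small simulation (illustrative numbers only, not the paper's diabetes data): errors drawn from a heavily skewed centered exponential still yield a slope estimate very close to the truth.

```python
import random

random.seed(1)

def ols_slope(xs, ys):
    # Closed-form OLS slope for simple linear regression.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

n, beta = 50000, 0.5
x = [random.uniform(0, 10) for _ in range(n)]
# Heavily skewed (centered exponential) errors: normality clearly violated.
y = [beta * xi + (random.expovariate(1.0) - 1.0) for xi in x]
b = ols_slope(x, y)  # still approximately 0.5: the point estimate is unbiased
```

What the normality violation would distort in small samples is the finite-sample distribution of the standard errors, hence coverage of confidence intervals, not the slope itself.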
Consistency of the MLE under mixture models
Chen, Jiahua
2016-01-01
The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...
Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement
Directory of Open Access Journals (Sweden)
Barash Vladimir D.
2016-03-01
Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
Chemical model reduction under uncertainty
Malpica Galassi, Riccardo
2017-03-06
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reaction detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced with different thresholds on the importance and marginal probabilities of the reactions.
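The "probability of inclusion" idea can be sketched with a toy Monte Carlo (the mechanism, rates, uncertainty factors, and importance threshold below are invented; the real method uses CSP importance indices, not raw rate constants): each pre-exponential factor is perturbed log-uniformly within its uncertainty factor, and we record how often each reaction clears the retention threshold.

```python
import random

random.seed(0)

# Toy mechanism: nominal pre-exponential factors and per-reaction uncertainty.
nominal_A = {"R1": 1e13, "R2": 5e10, "R3": 2e12}
uncertainty_factor = {"R1": 2.0, "R2": 10.0, "R3": 3.0}
threshold = 1e12  # reactions whose sampled rate falls below this are dropped

def sample_rate(A, uf):
    # Log-uniform perturbation within [A/uf, A*uf], a common convention.
    return A * uf ** random.uniform(-1.0, 1.0)

trials = 20000
included = {r: 0 for r in nominal_A}
for _ in range(trials):
    for r, A in nominal_A.items():
        if sample_rate(A, uncertainty_factor[r]) >= threshold:
            included[r] += 1

p_include = {r: included[r] / trials for r in nominal_A}
# R1 is always kept, R2 never survives, R3 is kept most of the time (~0.82).
```

Reactions with marginal inclusion probabilities are exactly the ones whose retention depends on the uncertainty, which is what the probabilistic error measures in the paper quantify.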
Directory of Open Access Journals (Sweden)
P. M. Shkapov
2015-01-01
Full Text Available The paper provides a mathematical model of thermo-gravity convection in a large-volume vertical cylinder. Heat is removed from the product via a cooling jacket at the top of the cylinder, and a laminar fluid motion is assumed. The model is based on the Navier-Stokes equations, the equation of heat transfer through the wall, and the heat transfer equation. A peculiarity of the process in large-volume tanks is the spatial distribution of the physical parameters, which is taken into account when constructing the model. The model corresponds to the process of beer wort fermentation in cylindrical-conical tanks (CCT). The CCT volume is divided into three zones, and model equations are obtained for each zone. The first zone has an annular cross-section and is bounded in height by the cooling jacket; in this zone the heat flow from the cooling jacket to the product dominates. The model equation of the first zone describes the process of heat transfer through the wall and is a linear inhomogeneous partial differential equation that is solved analytically. For the description of the second and third zones, a number of engineering assumptions were made. The fluid is considered Newtonian, viscous, and incompressible. Convective motion is considered in the Boussinesq approximation. The effect of viscous dissipation is neglected. The topology of the fluid motion is similar to cylindrical Poiseuille flow. The second-zone model consists of the Navier-Stokes equations in cylindrical coordinates, with simplifications introduced, and the heat equation in the liquid layer. The volume occupied by the upward convective flow pertains to the third zone. The convective flows do not mix and do not exchange heat. At the start of the process the medium has a uniform temperature and zero initial velocity in the whole volume, which allows us to specify the initial conditions for the process. The paper shows the
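The wall heat-transfer equation of the first zone is solved analytically in the paper; as a generic numerical illustration of the same kind of problem (not the paper's model, and with made-up diffusivity, geometry, and temperatures), a 1-D heat equation with one face held at the cooling-jacket temperature relaxes to a linear steady profile:

```python
# Explicit finite-difference solution of u_t = alpha * u_xx across a liquid
# layer, one face cooled, the other held at the bulk-product temperature.
alpha = 1e-3                    # thermal diffusivity (illustrative value)
length, nx = 1.0, 21
dx = length / (nx - 1)
dt = 0.4 * dx * dx / alpha      # satisfies the stability bound dt <= dx^2/(2*alpha)
u = [20.0] * nx                 # uniform initial temperature, deg C
for _ in range(2000):
    u[0], u[-1] = 5.0, 20.0     # cooled-wall / bulk-product boundary values
    new = u[:]
    for i in range(1, nx - 1):
        new[i] = u[i] + alpha * dt / dx ** 2 * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    u = new
# After enough steps the profile approaches the linear steady state 5..20 deg C.
```

The uniform initial temperature and zero initial velocity mentioned in the abstract play exactly the role of the initial condition `u = [20.0] * nx` here.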
Energy Technology Data Exchange (ETDEWEB)
Painter, Scott L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division
2016-06-28
The Department of Energy’s Office of Environmental Management recently revised a Remedial Investigation/Feasibility Study (RI/FS) that included an analysis of subsurface radionuclide transport at a potential new Environmental Management Disposal Facility (EMDF) in East Bear Creek Valley near Oak Ridge, Tennessee. The effects of three simplifying assumptions used in the RI/FS analyses are investigated using the same subsurface pathway conceptualization but with more flexible modeling tools. Neglect of vadose zone dispersion was found to be conservative or non-conservative, depending on the retarded travel time and the half-life. For a given equilibrium distribution coefficient, a relatively narrow range of half-life was identified for which neglect of vadose zone transport is non-conservative and radionuclide discharge into surface water is non-negligible. However, there are two additional conservative simplifications in the reference case that compensate for the non-conservative effect of neglecting vadose zone dispersion: the use of a steady infiltration rate and vadose zone velocity, and the way equilibrium sorption is used to represent transport in the fractured material of the saturated aquifer. With more realistic representations of all three processes, the RI/FS reference case was found to either provide a reasonably good approximation to the peak concentration or was significantly conservative (pessimistic) for all parameter combinations considered.
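The interplay between retarded travel time and half-life can be illustrated with a back-of-the-envelope decay calculation (the travel time and retardation factors below are hypothetical, not the RI/FS values): discharge is non-negligible only when the retarded travel time is not many multiples of the half-life.

```python
import math

def discharge_fraction(advective_time_yr, retardation, half_life_yr):
    # Fraction of a radionuclide surviving decay over its retarded travel time.
    travel_time = retardation * advective_time_yr
    return math.exp(-math.log(2.0) * travel_time / half_life_yr)

# Hypothetical vadose-zone parameters, purely illustrative.
t_adv = 50.0                                        # unretarded advective time, years
frac_tc99 = discharge_fraction(t_adv, 1.0, 2.13e5)  # long-lived, unretarded Tc-99
frac_sr90 = discharge_fraction(t_adv, 20.0, 28.8)   # short-lived, sorbing Sr-90
# Tc-99 arrives essentially undecayed; Sr-90 decays away during transport.
```

Dispersion matters most for nuclides in between these extremes, because an early-arriving dispersed fringe has had fewer half-lives to decay, which is why neglecting it can be non-conservative in a narrow half-life band.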
Modelling soil moisture under different land covers in a sub-humid ...
Indian Academy of Sciences (India)
... precipitation/irrigation and yields output of evapotranspiration and drainage. Spatial (vertical and lateral) variations in properties and processes are ignored and soil moisture content for the layer as a whole is modelled. Accordingly, application of the water balance equation to the soil layer under these assumptions for time period ...
About tests of the "simplifying" assumption for conditional copulas
Derumigny, Alexis; Fermanian, Jean-David
2016-01-01
We discuss the so-called “simplifying assumption” of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of point-wise events are proposed. The limiting distributions of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap sch...
41 CFR 60-3.9 - No assumption of validity.
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true No assumption of validity... assumption of validity. A. Unacceptable substitutes for evidence of validity. Under no circumstances will the... of its validity be accepted in lieu of evidence of validity. Specifically ruled out are: assumptions...
Teaching the Pursuit of Assumptions
Gardner, Peter; Johnson, Stephen
2015-01-01
Within the school of thought known as Critical Thinking, identifying or finding missing assumptions is viewed as one of the principal thinking skills. Within the new subject in schools and colleges, usually called Critical Thinking, the skill of finding missing assumptions is similarly prominent, as it is in that subject's public examinations. In…
Assumptions for the Annual Energy Outlook 1993
International Nuclear Information System (INIS)
1993-01-01
This report is an auxiliary document to the Annual Energy Outlook 1993 (AEO) (DOE/EIA-0383(93)). It presents a detailed discussion of the assumptions underlying the forecasts in the AEO. The energy modeling system is an economic equilibrium system, with component demand modules representing end-use energy consumption by major end-use sector. Another set of modules represents petroleum, natural gas, coal, and electricity supply patterns and pricing. A separate module generates annual forecasts of important macroeconomic and industrial output variables. Interactions among these components of energy markets generate projections of prices and quantities for which energy supply equals energy demand. This equilibrium modeling system is referred to as the Intermediate Future Forecasting System (IFFS). The supply models in IFFS for oil, coal, natural gas, and electricity determine supply and price for each fuel depending upon consumption levels, while the demand models determine consumption depending upon end-use price. IFFS solves for market equilibrium for each fuel by balancing supply and demand to produce an energy balance in each forecast year
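The equilibrium mechanism described above, supply and demand modules balanced fuel by fuel until price clears the market, can be sketched with toy curves (the functional forms and numbers are invented for illustration, not IFFS equations):

```python
def demand(price):
    # Toy end-use demand: consumption falls as price rises.
    return 100.0 / (1.0 + price)

def supply(price):
    # Toy supply: production rises with price.
    return 20.0 * price

def equilibrium(lo=0.0, hi=100.0, tol=1e-9):
    # Bisection on excess supply; supply - demand is increasing in price here,
    # so the market-clearing price is the unique root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if supply(mid) > demand(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p_star = equilibrium()
# At p_star, supply equals demand: the "energy balance" the text describes.
```

IFFS repeats this kind of balancing jointly across oil, coal, natural gas, and electricity in each forecast year, with the modules coupled through prices.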
RADIONUCLIDE TRANSPORT MODELS UNDER AMBIENT CONDITIONS
Energy Technology Data Exchange (ETDEWEB)
S. Magnuson
2004-11-01
The purpose of this model report is to document the unsaturated zone (UZ) radionuclide transport model, which evaluates, by means of three-dimensional numerical models, the transport of radioactive solutes and colloids in the UZ, under ambient conditions, from the repository horizon to the water table at Yucca Mountain, Nevada.
Improving the transferability of hydrological model parameters under changing conditions
Huang, Yingchun; Bárdossy, András
2014-05-01
Hydrological models are widely utilized to describe catchment behaviors with observed hydro-meteorological data. Hydrological process may be considered as non-stationary under the changing climate and land use conditions. An applicable hydrological model should be able to capture the essential features of the target catchment and therefore be transferable to different conditions. At present, many model applications based on the stationary assumptions are not sufficient for predicting further changes or time variability. The aim of this study is to explore new model calibration methods in order to improve the transferability of model parameters. To cope with the instability of model parameters calibrated on catchments in non-stationary conditions, we investigate the idea of simultaneously calibration on streamflow records for the period with dissimilar climate characteristics. In additional, a weather based weighting function is implemented to adjust the calibration period to future trends. For regions with limited data and ungauged basins, the common calibration was applied by using information from similar catchments. Result shows the model performance and transfer quantity could be well improved via common calibration. This model calibration approach will be used to enhance regional water management and flood forecasting capabilities.
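The simultaneous-calibration idea can be demonstrated with a deliberately tiny example (a one-parameter linear-reservoir model and synthetic wet/dry rainfall records, all invented): calibrating on both climate regimes at once, with optional weights, recovers a parameter that transfers across conditions.

```python
def simulate(k, rain):
    # One-parameter linear reservoir: storage drains at fraction k per step.
    s, flow = 0.0, []
    for r in rain:
        s += r
        q = k * s
        s -= q
        flow.append(q)
    return flow

def sse(obs, sim):
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

# Two records with dissimilar climate, "observed" with k_true = 0.3.
k_true = 0.3
rain_wet = [5.0, 8.0, 2.0, 6.0, 0.0, 7.0]
rain_dry = [1.0, 0.0, 2.0, 0.0, 1.0, 0.5]
obs_wet = simulate(k_true, rain_wet)
obs_dry = simulate(k_true, rain_dry)

# Simultaneous calibration: minimize a weighted sum of errors over both periods
# (the weights could instead be set by a weather-based weighting function).
w_wet, w_dry = 0.5, 0.5
best_k = min((k / 100.0 for k in range(1, 100)),
             key=lambda k: w_wet * sse(obs_wet, simulate(k, rain_wet))
                         + w_dry * sse(obs_dry, simulate(k, rain_dry)))
```

Calibrating on the wet record alone can fit equally well yet transfer poorly; forcing one parameter to explain both regimes is what stabilizes it.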
Extracurricular Business Planning Competitions: Challenging the Assumptions
Watson, Kayleigh; McGowan, Pauric; Smith, Paul
2014-01-01
Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…
Mexican-American Cultural Assumptions and Implications.
Carranza, E. Lou
The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…
Radionuclide Transport Models Under Ambient Conditions
Energy Technology Data Exchange (ETDEWEB)
G. Moridis; Q. Hu
2001-12-20
The purpose of Revision 00 of this Analysis/Model Report (AMR) is to evaluate (by means of 2-D semianalytical and 3-D numerical models) the transport of radioactive solutes and colloids in the unsaturated zone (UZ) under ambient conditions from the potential repository horizon to the water table at Yucca Mountain (YM), Nevada.
Radionuclide Transport Models Under Ambient Conditions
International Nuclear Information System (INIS)
Moridis, G.; Hu, Q.
2001-01-01
The purpose of Revision 00 of this Analysis/Model Report (AMR) is to evaluate (by means of 2-D semianalytical and 3-D numerical models) the transport of radioactive solutes and colloids in the unsaturated zone (UZ) under ambient conditions from the potential repository horizon to the water table at Yucca Mountain (YM), Nevada
Managerial and Organizational Assumptions in the CMM's
DEFF Research Database (Denmark)
Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel
2008-01-01
thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?
Life Support Baseline Values and Assumptions Document
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.
2018-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.
2013-07-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... assumptions--for paying plan benefits under terminating single-employer plans covered by title IV of the... assumptions are intended to reflect current conditions in the financial and annuity markets. Assumptions under...
2013-02-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... assumptions--for paying plan benefits under terminating single-employer plans covered by title IV of the... assumptions are intended to reflect current conditions in the financial and annuity markets. Assumptions under...
The Axioms and Special Assumptions
Borchers, Hans-Jürgen; Sen, Rathindra Nath
For ease of reference, the axioms, the nontriviality assumptions (3.1.10), the definition of a D-set and the special assumptions of Chaps. 5 and 6 are collected together in the following. The verbal explanations that follow the formal definitions a)-f) of (4.2.1) have been omitted. The entries below are numbered as they are in the text. Recall that βC is the subset of the cone C which, in a D-set, is seen to coincide with the boundary of C after the topology is introduced (Sects. 3.2 and 3.2.1).
Challenged assumptions and invisible effects
DEFF Research Database (Denmark)
Wimmelmann, Camilla Lawaetz; Vitus, Kathrine; Jervelund, Signe Smith
2017-01-01
of two complete intervention courses and an analysis of the official intervention documents. Findings – This case study exemplifies how the basic normative assumptions behind an immigrant-oriented intervention and the intrinsic power relations therein may be challenged and negotiated by the participants...
Portfolios: Assumptions, Tensions, and Possibilities.
Tierney, Robert J.; Clark, Caroline; Fenner, Linda; Herter, Roberta J.; Simpson, Carolyn Staunton; Wiser, Bert
1998-01-01
Presents a discussion between two educators of the history, assumptions, tensions, and possibilities surrounding the use of portfolios in multiple classroom contexts. Includes illustrative commentaries that offer alternative perspectives from a range of other educators with differing backgrounds and interests in portfolios. (RS)
Sampling Assumptions in Inductive Generalization
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
Two-fluid model for locomotion under self-confinement
Reigh, Shang Yik; Lauga, Eric
2017-09-01
The bacterium Helicobacter pylori causes ulcers in the stomach of humans by invading mucus layers protecting epithelial cells. It does so by chemically changing the rheological properties of the mucus from a high-viscosity gel to a low-viscosity solution in which it may self-propel. We develop a two-fluid model for this process of swimming under self-generated confinement. We solve exactly for the flow and the locomotion speed of a spherical swimmer located in a spherically symmetric system of two Newtonian fluids whose boundary moves with the swimmer. We also treat separately the special case of an immobile outer fluid. In all cases, we characterize the flow fields, their spatial decay, and the impact of both the viscosity ratio and the degree of confinement on the locomotion speed of the model swimmer. The spatial decay of the flow retains the same power-law decay as for locomotion in a single fluid but with a decreased magnitude. Independent of the assumption chosen to characterize the impact of confinement on the actuation applied by the swimmer, its locomotion speed always decreases with an increase in the degree of confinement. Our modeling results suggest that a low-viscosity region of at least six times the effective swimmer size is required to lead to swimming with speeds similar to locomotion in an infinite fluid, corresponding to a region of size above ≈25 μm for Helicobacter pylori.
Fiber Bundle Model Under Heterogeneous Loading
Roy, Subhadeep; Goswami, Sanchari
2018-03-01
The present work deals with the behavior of the fiber bundle model under heterogeneous loading conditions. The model is explored both in the mean-field limit as well as with local stress concentration. In the mean-field limit, the failure abruptness decreases with increasing order k of heterogeneous loading. In this limit, a brittle to quasi-brittle transition is observed at a particular strength of disorder which changes with k. On the other hand, the model is hardly affected by such heterogeneity in the limit where local stress concentration plays a crucial role. The continuous limit of the heterogeneous loading is also studied and discussed in this paper. Some of the important results related to the fiber bundle model are reviewed and their responses to our new scheme of heterogeneous loading are studied in detail. Our findings are universal with respect to the nature of the threshold distribution adopted to assign strength to an individual fiber.
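For readers unfamiliar with the fiber bundle model, the standard equal-load-sharing (mean-field) version is a few lines of code; this is the homogeneous baseline the paper modifies, sketched with uniform strength thresholds (for which the critical load per fiber is 1/4):

```python
import random

random.seed(7)

def fbm_equal_load_sharing(n, total_load):
    # Fibers get i.i.d. uniform(0,1) strength thresholds; the load is shared
    # equally among surviving fibers.  Returns True if the bundle stabilizes.
    thresholds = [random.random() for _ in range(n)]
    alive = n
    while alive > 0:
        per_fiber = total_load / alive
        # Fibers whose threshold is below the per-fiber load break; since the
        # per-fiber load only grows, counting over all thresholds is correct.
        survivors = sum(1 for t in thresholds if t >= per_fiber)
        if survivors == alive:
            return True          # stable: every surviving fiber holds its share
        alive = survivors
    return False                 # cascading failure broke the entire bundle

n = 10000
holds = fbm_equal_load_sharing(n, 0.20 * n)  # below critical load 0.25 per fiber
fails = fbm_equal_load_sharing(n, 0.30 * n)  # above critical: full cascade
```

The heterogeneous-loading scheme of the paper replaces the single `total_load` by a fiber-dependent load profile of order k; the cascade loop itself is unchanged.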
Numerical modeling of materials under extreme conditions
Brown, Eric
2014-01-01
The book presents twelve state of the art contributions in the field of numerical modeling of materials subjected to large strain, high strain rates, large pressure and high stress triaxialities, organized into two sections. The first part is focused on high strain rate-high pressures such as those occurring in impact dynamics and shock compression related phenomena, dealing with material response identification, advanced modeling incorporating microstructure and damage, stress waves propagation in solids and structures response under impact. The latter part is focused on large strain-low strain rates applications such as those occurring in technological material processing, dealing with microstructure and texture evolution, material response at elevated temperatures, structural behavior under large strain and multi axial state of stress.
Modeling of STATCOM under different loading conditions
DEFF Research Database (Denmark)
George, G.J.; Ramachandran, Rakesh; Kowsalya, M.
2012-01-01
This paper deals with the study and analysis of Flexible AC Transmission Systems (FACTS), mainly the modeling of STATCOM. Reactive power compensation plays a very important role in the transmission of electric power. A comparative study of how the reactive power is injected into the transmission system with and without STATCOM under different loading conditions is also illustrated in this paper. Simulations are performed using MATLAB/SIMULINK software.
29 CFR 1607.9 - No assumption of validity.
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false No assumption of validity. 1607.9 Section 1607.9 Labor... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.9 No assumption of validity. A. Unacceptable substitutes for evidence of validity. Under no circumstances will the general reputation of a test or other...
On testing the missing at random assumption
DEFF Research Database (Denmark)
Jaeger, Manfred
2006-01-01
Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption.
Graphical models for inference under outcome-dependent sampling
DEFF Research Database (Denmark)
Didelez, V; Kreiner, S; Keiding, N
2010-01-01
We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including...
Complex networks under dynamic repair model
Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao
2018-01-01
Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.
Assumptions for the Annual Energy Outlook 1992
International Nuclear Information System (INIS)
1992-01-01
This report serves as an auxiliary document to the Energy Information Administration (EIA) publication Annual Energy Outlook 1992 (AEO) (DOE/EIA-0383(92)), released in January 1992. The AEO forecasts were developed for five alternative cases and consist of energy supply, consumption, and price projections by major fuel and end-use sector, which are published at a national level of aggregation. The purpose of this report is to present important quantitative assumptions, including world oil prices and macroeconomic growth, underlying the AEO forecasts. The report has been prepared in response to external requests, as well as analyst requirements for background information on the AEO and studies based on the AEO forecasts.
Stress-reducing preventive maintenance model for a unit under stressful environment
International Nuclear Information System (INIS)
Park, J.H.; Chang, Woojin; Lie, C.H.
2012-01-01
We develop a preventive maintenance (PM) model for a unit operated under a stressful environment. The PM model in this paper consists of a failure rate model and two cost models used to determine the optimal PM schedule, which minimizes a cost rate. The assumption of the proposed model is that a stressful environment accelerates the failure of the unit and that periodic maintenance reduces the stress from outside. The failure rate model handles the maintenance effect of PM using improvement and stress factors. The cost models are categorized into two failure-recognition cases: immediate failure recognition and periodic failure detection. The optimal PM schedule is obtained by considering the trade-off between the related cost and the lifetime of the unit in our model setting. The practical usage of the proposed model is demonstrated through a numerical example.
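The cost-rate trade-off can be illustrated with a simpler textbook-style model (not the paper's improvement/stress-factor formulation; the Weibull parameters and costs below are invented): PM at interval T costs c_pm, failures between PMs accumulate at a power-law rate, and the optimal T minimizes expected cost per unit time.

```python
def expected_failures(T, beta=2.5, eta=100.0):
    # Power-law (Weibull) cumulative failure intensity, as under minimal repair.
    return (T / eta) ** beta

def cost_rate(T, c_pm=50.0, c_fail=500.0):
    # Expected cost per unit time for PM interval T.
    return (c_pm + c_fail * expected_failures(T)) / T

# Grid search: short intervals waste PM cost, long ones accumulate failures.
Ts = [t / 10.0 for t in range(10, 2001)]
T_opt = min(Ts, key=cost_rate)  # roughly 33.9 for these illustrative numbers
```

In the paper's setting, the stress factor would inflate the failure intensity between maintenances and the improvement factor would reset part of it at each PM, but the same minimization over the schedule applies.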
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed a two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under dependent censoring. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results are reported for the mean squared errors of estimators under a proportional hazards model and two different nonproportional hazards models.
Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.
Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander
2017-11-01
Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Radionuclide Transport Models Under Ambient Conditions
Energy Technology Data Exchange (ETDEWEB)
G. Moridis; Q. Hu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to evaluate (by means of 2-D semianalytical and 3-D numerical models) the transport of radioactive solutes and colloids in the unsaturated zone (UZ) under ambient conditions from the potential repository horizon to the water table at Yucca Mountain (YM), Nevada. This is in accordance with the ''AMR Development Plan U0060, Radionuclide Transport Models Under Ambient Conditions'' (CRWMS M and O 1999a). This AMR supports the UZ Flow and Transport Process Model Report (PMR). This AMR documents the UZ Radionuclide Transport Model (RTM). This model considers: the transport of radionuclides through fractured tuffs; the effects of changes in the intensity and configuration of fracturing from hydrogeologic unit to unit; colloid transport; physical and retardation processes and the effects of perched water. In this AMR they document the capabilities of the UZ RTM, which can describe flow (saturated and/or unsaturated) and transport, and accounts for (a) advection, (b) molecular diffusion, (c) hydrodynamic dispersion (with full 3-D tensorial representation), (d) kinetic or equilibrium physical and/or chemical sorption (linear, Langmuir, Freundlich or combined), (e) first-order linear chemical reaction, (f) radioactive decay and tracking of daughters, (g) colloid filtration (equilibrium, kinetic or combined), and (h) colloid-assisted solute transport. Simulations of transport of radioactive solutes and colloids (incorporating the processes described above) from the repository horizon to the water table are performed to support model development and support studies for Performance Assessment (PA). The input files for these simulations include transport parameters obtained from other AMRs (i.e., CRWMS M and O 1999d, e, f, g, h; 2000a, b, c, d). When not available, the parameter values used are obtained from the literature. The results of the simulations are used to evaluate the transport of radioactive
Radionuclide Transport Models Under Ambient Conditions
International Nuclear Information System (INIS)
Moridis, G.; Hu, Q.
2000-01-01
The purpose of this Analysis/Model Report (AMR) is to evaluate (by means of 2-D semianalytical and 3-D numerical models) the transport of radioactive solutes and colloids in the unsaturated zone (UZ) under ambient conditions from the potential repository horizon to the water table at Yucca Mountain (YM), Nevada. This is in accordance with the ''AMR Development Plan U0060, Radionuclide Transport Models Under Ambient Conditions'' (CRWMS M and O 1999a). This AMR supports the UZ Flow and Transport Process Model Report (PMR). This AMR documents the UZ Radionuclide Transport Model (RTM). This model considers: the transport of radionuclides through fractured tuffs; the effects of changes in the intensity and configuration of fracturing from hydrogeologic unit to unit; colloid transport; physical and retardation processes and the effects of perched water. In this AMR they document the capabilities of the UZ RTM, which can describe flow (saturated and/or unsaturated) and transport, and accounts for (a) advection, (b) molecular diffusion, (c) hydrodynamic dispersion (with full 3-D tensorial representation), (d) kinetic or equilibrium physical and/or chemical sorption (linear, Langmuir, Freundlich or combined), (e) first-order linear chemical reaction, (f) radioactive decay and tracking of daughters, (g) colloid filtration (equilibrium, kinetic or combined), and (h) colloid-assisted solute transport. Simulations of transport of radioactive solutes and colloids (incorporating the processes described above) from the repository horizon to the water table are performed to support model development and support studies for Performance Assessment (PA). The input files for these simulations include transport parameters obtained from other AMRs (i.e., CRWMS M and O 1999d, e, f, g, h; 2000a, b, c, d). When not available, the parameter values used are obtained from the literature. The results of the simulations are used to evaluate the transport of radioactive solutes and colloids, and
Directory of Open Access Journals (Sweden)
Simon Nielsen
2015-01-01
We examined the effects of normal ageing on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive ageing affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modelling (SEM; Model 2), informed by functional structures that were modelled with path analyses in SEM (Model 1). The results show that ageing effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective ageing effects on processing speed, and inconsistent with other studies reporting ageing effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and variables of demography. The study demonstrates that SEM is a sensitive method for detecting cognitive ageing effects even within a narrow age range, and a useful approach to structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.
Morrison, Diane M; Golder, Seana; Keller, Thomas E; Gillmore, Mary Rogers
2002-09-01
The theory of reasoned action (TRA) is used to model decisions about substance use among young mothers who became premaritally pregnant at age 17 or younger. The results of structural equation modeling to test the TRA indicated that most relationships specified by the model were significant and in the predicted direction. Attitude was a stronger predictor of intention than norm, but both were significantly related to intention, and intention was related to actual marijuana use 6 months later. Outcome beliefs were bidimensional, and positive outcome beliefs, but not negative beliefs, were significantly related to attitude. Prior marijuana use was only partially mediated by the TRA variables; it also was directly related to intentions to use marijuana and to subsequent use.
2010-11-15
... interest assumptions under the regulation for valuation dates in December 2010. Interest assumptions are...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...
Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model
International Nuclear Information System (INIS)
Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.
2002-01-01
We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well
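The linear-scaling relationship this abstract describes can be sketched numerically. The following assumes a binormal candidate-analysis ROC and illustrative values for the initial-detection operating point (lam, nu); none of the parameter values are from the paper.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Assumed setup: candidate scores are N(0,1) for non-lesion candidates and
# N(mu, sigma^2) for lesion candidates; (lam, nu) are the FROC operating
# point coordinates for detecting initial candidates (illustrative values).
mu, sigma = 1.5, 1.2
lam, nu = 2.0, 0.9

def froc_point(t):
    """Map a decision threshold t to an (FPs per image, lesion TPF) FROC point."""
    fpf = 1.0 - phi(t)                 # candidate-analysis false-positive fraction
    tpf = 1.0 - phi((t - mu) / sigma)  # candidate-analysis true-positive fraction
    return lam * fpf, nu * tpf         # linear scaling, per the model in the abstract

curve = [froc_point(t) for t in [x / 10.0 for x in range(-30, 31)]]
```

Sweeping the threshold traces the full FROC curve; the key point is that it is just the candidate-analysis ROC rescaled by (lam, nu).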
Modelling human eye under blast loading.
Esposito, L; Clemente, C; Bonora, N; Rossi, T
2015-01-01
Primary blast injury (PBI) is the general term that refers to injuries resulting from the mere interaction of a blast wave with the body. Although few instances of primary ocular blast injury, without a concomitant secondary blast injury from debris, are documented, some experimental studies demonstrate its occurrence. In order to investigate PBI to the eye, a finite element model of the human eye using simple constitutive models was developed. The material parameters were calibrated by a multi-objective optimisation performed on available eye impact test data. The behaviour of the human eye and the dynamics of mechanisms occurring under PBI loading conditions were modelled. For the generation of the blast waves, different combinations of explosive (trinitrotoluene) mass charge and distance from the eye were analysed. An interpretation of the resulting pressure, based on the propagation and reflection of the waves inside the eye bulb and orbit, is proposed. The peculiar geometry of the bony orbit (similar to a frustum cone) can induce a resonance cavity effect and generate a pressure standing wave potentially hurtful for eye tissues.
Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions
Wang, XiaoCong; Liu, YiMin; Bao, Qing
2016-01-01
Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provides cloud geometry as well as cloud micro- and macro-properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently introduced by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed as a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under the maximum (MO) and maximum-random (MRO) overlap assumptions, whereas they are remarkably overestimated under the random overlap (RO) assumption, in comparison with results using the CRM's inherent cloud geometry. These biases can reach as high as 100 W m-2 for SWCF and 60 W m-2 for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance in both SWCF and LWCF simulations, with the biases reduced almost 3-fold compared with the traditional overlap assumptions. The superiority of the GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under the traditional overlap assumptions. The study also points out the deficiency of assuming a constant Lcf in the GenO assumption. Further examinations indicate that the CRM-diagnosed Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account the variation of Lcf in the vertical well reproduces such a relationship and
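The general overlap combination described in this abstract has a simple closed form, which the following sketch implements for a short column of layers. The layer fractions, spacing, and decorrelation length are illustrative assumptions, not values from the study.

```python
import math

def total_cloud_cover(fracs, dz, lcf=2.0):
    """Vertically projected cloud cover, combining layers pairwise from the top.

    General overlap (Hogan and Illingworth, 2000):
        C = alpha * C_max + (1 - alpha) * C_rand,  alpha = exp(-dz / Lcf)
    where C_max is maximum overlap and C_rand is random overlap of the pair.
    dz and lcf (decorrelation length) share units, e.g. km (values assumed).
    """
    cover = fracs[0]
    for c in fracs[1:]:
        alpha = math.exp(-dz / lcf)
        c_max = max(cover, c)               # maximum-overlap combination
        c_rand = cover + c - cover * c      # random-overlap combination
        cover = alpha * c_max + (1.0 - alpha) * c_rand
    return cover

layers = [0.3, 0.4, 0.2]                     # illustrative layer cloud fractions
mo = total_cloud_cover(layers, dz=1.0, lcf=1e9)    # alpha ~ 1: maximum overlap
ro = total_cloud_cover(layers, dz=1.0, lcf=1e-9)   # alpha ~ 0: random overlap
gen = total_cloud_cover(layers, dz=1.0, lcf=2.0)   # general overlap in between
```

As the abstract notes, maximum overlap gives the smallest projected cover (least reflective column) and random overlap the largest, with general overlap interpolating between them via Lcf.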
Multitask Quantile Regression under the Transnormal Model.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2016-01-01
We consider estimating multi-task quantile regression under the transnormal model, with a focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected one. Our work combines the strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality in high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves an "oracle"-like convergence rate, and provides provable prediction intervals in the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to protein mass spectroscopy data.
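The rank-based ingredient of such transnormal estimators is the Kendall-tau-to-correlation map, which the following sketch illustrates on two monotonically transformed Gaussian variables. The data-generating setup is invented for illustration; the paper's penalized estimators add regularization on top of this raw estimate.

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau for two 1-D samples (no ties assumed)."""
    n = len(x)
    concordant = sum(
        1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    return concordant / (n * (n - 1) / 2)

def rank_based_correlation(X):
    """Estimate the latent correlation under a transnormal model:
    Sigma_jk = sin(pi/2 * tau_jk), invariant to monotone marginal transforms."""
    p = X.shape[1]
    S = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            S[j, k] = S[k, j] = np.sin(np.pi / 2 * kendall_tau(X[:, j], X[:, k]))
    return S

rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=400)
X = np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3])  # monotone marginal transforms
S_hat = rank_based_correlation(X)  # recovers ~0.6 despite the transforms
```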
International Nuclear Information System (INIS)
Maevskii, K. K.; Kinelovskii, S. A.
2015-01-01
The numerical results of modeling of shock wave loading of mixtures with an SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the mixture. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with experimental data obtained by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is also described.
Wang, Xiao-yang; Zhao, Nan; Chen, Nan; Zhu, Chang-hua; Pei, Chang-xing
2018-01-01
In the free-space quantum channel, with the introduction and implementation of satellite-to-ground link transmission, research on single-photon transmission has attracted great interest. We propose a single-photon receiving model and analyze the influence of atmospheric turbulence on single-photon transmission. We obtain the relationship between single-photon receiving efficiency and atmospheric turbulence, and analyze the influence of atmospheric turbulence on quantum channel performance through single-photon counting. Finally, we present a reasonable simulation analysis. Simulation results show that as the strength of the atmospheric fluctuations increases, the counting distribution gradually broadens and the utilization of the quantum channel drops. Furthermore, the key generation rate and transmission distance decrease sharply in the case of strong turbulence.
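The broadening of the counting distribution with turbulence strength can be reproduced with a toy simulation. This assumes log-normal transmittance fading and Poisson detection statistics, a common simplification; the mean photon number and fading parameters are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def photon_counts(n_pulses, mean_photons=5.0, sigma_ln=0.3):
    """Simulated single-photon detector counts through a turbulent channel.

    Assumed model: channel transmittance fluctuates log-normally (normalized
    to unit mean), and detected counts are Poisson given the transmittance.
    sigma_ln parametrizes turbulence strength (illustrative values).
    """
    t = rng.lognormal(mean=-sigma_ln**2 / 2, sigma=sigma_ln, size=n_pulses)
    return rng.poisson(mean_photons * t)

weak = photon_counts(200_000, sigma_ln=0.1)    # weak turbulence
strong = photon_counts(200_000, sigma_ln=0.8)  # strong turbulence
```

Both runs have the same mean count, but the strong-turbulence counting distribution is much broader, mirroring the qualitative result in the abstract.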
Directory of Open Access Journals (Sweden)
Luca Caricchi
2016-04-01
Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems, and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages, assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at a temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages, to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
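The synthetic-zircon-age workflow sketched in this abstract can be caricatured in a few lines: track the magma volume inside the crystallisation window over time, then draw ages with probability proportional to that volume. The injection history, cooling law, and all constants below are invented for illustration, not the study's thermal model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy history: an intrusion assembled at constant flux until t = 0.5
# (normalized time), then cooling; vol_in_window stands in for the volume
# of magma within the zircon saturation range (all assumptions).
t = np.linspace(0.0, 1.0, 200)
injected = np.minimum(t / 0.5, 1.0)                 # volume grows until assembly ends
cooling = np.exp(-3.0 * np.maximum(t - 0.5, 0.0))   # crude post-assembly cooling
vol_in_window = injected * cooling

# zircons crystallise in proportion to the volume in the window
weights = vol_in_window / vol_in_window.sum()
ages = rng.choice(1.0 - t, size=5_000, p=weights)   # age = time before present

mode_age = (1.0 - t)[np.argmax(weights)]
```

The shape of the resulting age population (mode, median, spread) shifts with the assumed flux and final volume, which is the statistical signal the study exploits.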
2012-12-14
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2013-01-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2013-10-22
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2012-11-16
... regulation for valuation dates in December 2012. The interest assumptions are used for paying benefits under... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2012-04-13
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2013-08-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2012-10-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
Bioprocess optimization under uncertainty using ensemble modeling
Liu, Yang; Gunawan, Rudiyanto
2017-01-01
The performance of model-based bioprocess optimizations depends on the accuracy of the mathematical model. However, models of bioprocesses often have large uncertainty due to the lack of model identifiability. In the presence of such uncertainty, process optimizations that rely on the predictions of a single “best fit” model, e.g. the model resulting from a maximum likelihood parameter estimation using the available process data, may perform poorly in real life. In this study, we employed ens...
Rinderer, M.; van Meerveld, H.J.; Seibert, J.
2014-01-01
Topographic indices like the Topographic Wetness Index (TWI) have been used to predict spatial patterns of average groundwater levels and to model the dynamics of the saturated zone during events (e.g., TOPMODEL). However, the assumptions underlying the use of the TWI in hydrological models, of
A novel modeling approach for job shop scheduling problem under uncertainty
Directory of Open Access Journals (Sweden)
Behnam Beheshti Pur
2013-11-01
When aiming at improving efficiency and reducing cost in manufacturing environments, production scheduling can play an important role. Although a common workshop is full of uncertainties, researchers using mathematical programs have mainly focused on deterministic problems. After briefly reviewing and discussing popular modeling approaches in the field of stochastic programming, this paper proposes a new approach based on utility theory for a certain range of problems and under some practical assumptions. Expected utility programming, as the proposed approach, is compared with other well-known methods, and its meaningfulness and usefulness are illustrated via numerical examples and a real case.
Nonlinear dynamics in work groups with Bion's basic assumptions.
Dal Forno, Arianna; Merlone, Ugo
2013-04-01
According to several authors, Bion's contribution has been a landmark in the thought and conceptualization of the unconscious functioning of human beings in groups. We provide a mathematical model of group behavior in which heterogeneous members may behave as if they shared, to different degrees, what in Bion's theory is a common basic assumption. Our formalization combines both individual characteristics and group dynamics. Using this formalization, we analyze the group dynamics as the result of the individual dynamics of the members and prove that, under some conditions, each individual reproduces the group dynamics at a different scale. In particular, we provide an example in which the chaotic behavior of the group is reflected in each member.
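The scaling property claimed in the abstract (each member reproduces the group dynamics at a different scale) can be illustrated with a toy weighted nonlinear dynamic. The logistic response and the weights are illustrative assumptions, not the authors' formalization.

```python
def group_step(x, weights, r=3.9):
    """One step of a toy group dynamic: each member responds to the group
    state through a logistic map, weighted by how strongly that member
    shares the basic assumption (weights sum to 1; values illustrative)."""
    members = [w * r * x * (1.0 - x) for w in weights]
    # each member's contribution is a scaled copy of the group update,
    # so individual trajectories mirror the group trajectory at scale w
    return sum(members)

weights = [0.5, 0.3, 0.2]
x = 0.2
trajectory = [x]
for _ in range(50):
    x = group_step(x, weights)
    trajectory.append(x)
```

With r = 3.9 the group trajectory is chaotic, and each member's contribution is that same chaotic signal multiplied by the member's weight, echoing the "different scale" result.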
Different Random Distributions Research on Logistic-Based Sample Assumption
Directory of Open Access Journals (Sweden)
Jing Pan
2014-01-01
A logistic-based sample assumption is proposed in this paper, together with a study of different random distributions within this system. It provides an assumption system for logistic-based samples, including the structure of the sample space. Moreover, the influence of different random input distributions has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal, uniform, and beta) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under the different random distributions. Numerical analysis then indicates that the distribution of the outputs depends to some extent on that of the inputs, and that this assumption system is not an independent-increment process but is quasi-stationary.
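An experiment of the kind described can be set up in a few lines: draw inputs from the three distributions and push each sample through a logistic map. The map parameter r and the number of iterations are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic_outputs(inputs, r=4.0, n_iter=3):
    """Iterate the logistic map from a sample of initial inputs and return
    the resulting outputs (r and n_iter are assumed values)."""
    x = np.clip(inputs, 1e-9, 1 - 1e-9)  # keep initial states inside (0, 1)
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
    return x

samples = {
    "uniform": rng.uniform(0, 1, 10_000),
    "normal": np.clip(rng.normal(0.5, 0.15, 10_000), 0, 1),
    "beta": rng.beta(2, 5, 10_000),
}
outputs = {name: logistic_outputs(s) for name, s in samples.items()}
```

Comparing histograms of the three `outputs` arrays shows how the output distribution retains a dependence on the input distribution, as the abstract reports.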
Bioprocess optimization under uncertainty using ensemble modeling.
Liu, Yang; Gunawan, Rudiyanto
2017-02-20
The performance of model-based bioprocess optimizations depends on the accuracy of the mathematical model. However, models of bioprocesses often have large uncertainty due to the lack of model identifiability. In the presence of such uncertainty, process optimizations that rely on the predictions of a single "best fit" model, e.g. the model resulting from a maximum likelihood parameter estimation using the available process data, may perform poorly in real life. In this study, we employed ensemble modeling to account for model uncertainty in bioprocess optimization. More specifically, we adopted a Bayesian approach to define the posterior distribution of the model parameters, based on which we generated an ensemble of model parameters using a uniformly distributed sampling of the parameter confidence region. The ensemble-based process optimization involved maximizing the lower confidence bound of the desired bioprocess objective (e.g. yield or product titer), using a mean-standard deviation utility function. We demonstrated the performance and robustness of the proposed strategy in an application to a monoclonal antibody batch production by mammalian hybridoma cell culture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
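The mean-standard deviation (lower-confidence-bound) utility described in this abstract can be sketched as follows. The bioprocess model, parameter ranges, and search grid are hypothetical stand-ins, not the hybridoma culture model from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def predicted_titer(u, theta):
    """Hypothetical stand-in for a bioprocess model: predicted titer as a
    function of a single process setting u, with uncertain parameters theta."""
    return theta[0] * u * np.exp(-theta[1] * u)

# Ensemble of parameter sets, standing in for a uniform sample of the
# parameter confidence region (ranges are illustrative).
ensemble = rng.uniform([0.8, 0.05], [1.2, 0.15], size=(200, 2))

def utility(u, k=1.0):
    """Mean - k*std utility over the ensemble, i.e. a lower confidence
    bound on the predicted objective."""
    preds = np.array([predicted_titer(u, th) for th in ensemble])
    return preds.mean() - k * preds.std()

grid = np.linspace(0.1, 30.0, 300)
u_robust = grid[np.argmax([utility(u) for u in grid])]
u_nominal = grid[np.argmax([predicted_titer(u, ensemble.mean(axis=0)) for u in grid])]
```

Optimizing the lower confidence bound (`u_robust`) rather than the single "best fit" prediction (`u_nominal`) trades some nominal performance for robustness against parameter uncertainty, which is the strategy's core idea.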
Assumptions and Policy Decisions for Vital Area Identification Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)
2016-10-15
U.S. Nuclear Regulatory Commission and IAEA guidance indicate that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions can be overlooked at the beginning of a VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for the analysis were settled on the basis of document research and expert panel discussions. It was also found that further assumptions remain to be defined in future studies for other types of nuclear power plants. One such assumption is the mission time, which was inherited from the PSA.
Evidential Model Validation under Epistemic Uncertainty
Directory of Open Access Journals (Sweden)
Wei Deng
2018-01-01
This paper proposes evidence-theory-based methods to both quantify epistemic uncertainty and validate a computational model. Three types of epistemic uncertainty concerning input model data are considered: sparse points, intervals, and probability distributions with uncertain parameters. Through the proposed methods, the given data are described as corresponding probability distributions for uncertainty propagation in the computational model and, thus, for the model validation. The proposed evidential model validation method is inspired by the ideas of Bayesian hypothesis testing and the Bayes factor: it compares the model predictions with the observed experimental data so as to assess the predictive capability of the model and to support the decision on model acceptance. Following the idea of the Bayes factor, the frame of discernment of Dempster-Shafer evidence theory is constituted and the basic probability assignment (BPA) is determined. Because the proposed validation method is evidence based, the robustness of the result can be guaranteed, and the hypothesis with the most evidential support about the model testing will be favored by the BPA. The validity of the proposed methods is illustrated through a numerical example.
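The Bayes-factor-to-BPA step described here can be sketched for a simple two-hypothesis frame. The Gaussian likelihoods, the data values, and the 10% ignorance mass are all illustrative assumptions, not the paper's numerical example.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical setup: the model predicts y ~ N(mu_model, sigma); under the
# alternative, the data are centred elsewhere (all numbers illustrative).
observations = [9.8, 10.3, 10.1, 9.9]
mu_model, mu_alt, sigma = 10.0, 11.0, 0.5

def bayes_factor(data):
    """Likelihood ratio of the 'model valid' vs 'model invalid' hypotheses."""
    l_model = math.prod(normal_pdf(y, mu_model, sigma) for y in data)
    l_alt = math.prod(normal_pdf(y, mu_alt, sigma) for y in data)
    return l_model / l_alt

B = bayes_factor(observations)

# Convert the Bayes factor into a BPA on the frame {valid, invalid},
# reserving some mass for ignorance (a common Dempster-Shafer device;
# the 10% ignorance mass is an assumption).
m_ignorance = 0.1
m_valid = (1 - m_ignorance) * B / (1 + B)
m_invalid = (1 - m_ignorance) / (1 + B)
```

The BPA then favors whichever hypothesis the evidence supports, with the ignorance mass expressing what the data cannot discriminate.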
Modelling of diurnal cycle under climate change
Energy Technology Data Exchange (ETDEWEB)
Eliseev, A.V.; Bezmenov, K.V.; Demchenko, P.F.; Mokhov, I.I.; Petoukhov, V.K. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Atmospheric Physics
1995-12-31
The observed diurnal temperature range (DTR) displays a remarkable change over the last 30 years. Land air DTR generally decreases under global climate warming because night minimum temperatures increase more than day maximum temperatures. Changes in the characteristics of the atmospheric hydrological cycle under global warming, and possible changes in the background aerosol content of the atmosphere, may cause essential errors in the estimation of DTR tendencies under global warming. The result of this study is an investigation of the effect of cloudiness on the DTR and on the diurnal range of blackbody radiative emissivity (BD). It is shown that in some cases (particularly in cold seasons) this results in opposite changes in DTR and BD at doubled atmospheric CO2 content. The influence of background aerosol is the same as that of cloudiness.
Schaefli, Bettina
2015-04-01
Hydropower is a pillar of renewable electricity production in almost all world regions. The planning horizon of major hydropower infrastructure projects stretches over several decades, and consideration of evolving climatic conditions plays an ever-increasing role. This review of model-based climate change impact assessments provides a synthesis of the wealth of underlying modelling assumptions, highlights the importance of local factors, and attempts to identify the most urgent open questions. Based on existing case studies, it critically discusses whether current hydro-climatic modelling frameworks are likely to provide narrow enough water scenario ranges to be included in economic analyses for end-to-end climate change impact assessments, including electricity market models. This is completed with an overview of boundary conditions that are not, or only indirectly, climate-related, such as economic growth, legal constraints, national subsidy frameworks, or growing competition for water, which might locally outweigh any climate change impacts by a large margin.
Modeling of Current Transformers Under Saturation Conditions
Directory of Open Access Journals (Sweden)
Martin Prochazka
2006-01-01
During a short circuit, the input signal of the relay can be distorted by magnetic core saturation of the current transformer (CT). It is useful to verify the behavior of the CT with a mathematical model. The paper describes one-phase and three-phase models, and it presents some methods for analyzing and classifying a distorted secondary current.
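The distortion mechanism can be caricatured with a crude single-phase flux-integration model: an asymmetrical fault current with a DC offset drives the core flux to saturation, after which the secondary current collapses for part of each cycle. The turns ratio, saturation flux, burden, and fault waveform below are all illustrative assumptions, not the paper's models.

```python
import math

def ct_secondary_current(primary, n_ratio=100.0, flux_sat=0.5,
                         r_burden=1.0, dt=1e-4):
    """Toy single-phase CT model: the burden voltage integrates into core
    flux; while the flux is pinned at saturation, secondary current that
    would drive the flux deeper is blocked (all parameters illustrative)."""
    flux, out = 0.0, []
    for i_p in primary:
        i_ideal = i_p / n_ratio
        saturated = abs(flux) >= flux_sat and flux * i_ideal > 0.0
        i_s = 0.0 if saturated else i_ideal
        flux += r_burden * i_s * dt   # volt-second integration of burden voltage
        out.append(i_s)
    return out

# asymmetrical fault current: 50 Hz sine plus a decaying DC offset
t = [k * 1e-4 for k in range(2000)]
primary = [1000.0 * (math.sin(2 * math.pi * 50.0 * s) + math.exp(-s / 0.05))
           for s in t]
secondary = ct_secondary_current(primary)
distorted = any(s == 0.0 and abs(p) > 100.0 for s, p in zip(secondary, primary))
```

In this sketch the DC offset steadily ratchets the flux toward `flux_sat`, and once saturation is reached the secondary waveform is clipped, which is the kind of distorted relay input the abstract refers to.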
On the Necessary and Sufficient Assumptions for UC Computation
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio
2010-01-01
We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results for all of them. Perhaps most interestingly we show that: • For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passive secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for in the KRA model: any non-trivial KRA is sufficient for UC computation. • We show...
How Symmetrical Assumptions Advance Strategic Management Research
DEFF Research Database (Denmark)
Foss, Nicolai Juul; Hallberg, Hallberg
2014-01-01
We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical…
Peacebuilding: assumptions, practices and critiques
Directory of Open Access Journals (Sweden)
Cravo, Teresa Almeida
2017-05-01
Full Text Available Peacebuilding has become a guiding principle of international intervention in the periphery since its inclusion in the Agenda for Peace of the United Nations in 1992. The aim of creating the conditions for a self-sustaining peace in order to prevent a return to armed conflict is, however, far from easy or consensual. The conception of liberal peace proved particularly limited, and inevitably controversial, and the reality of war-torn societies far more complex than anticipated by international actors that today assume activities in the promotion of peace in post-conflict contexts. With a trajectory full of contested successes and some glaring failures, the current model has been the target of harsh criticism and widespread scepticism. This article critically examines the theoretical background and practicalities of peacebuilding, exploring its ambition as well as the weaknesses of the paradigm adopted by the international community since the 1990s.
Huang, Yanping; Rao, Rajesh P N
2013-01-01
A key problem in neuroscience is understanding how the brain makes decisions under uncertainty. Important insights have been gained using tasks such as the random dots motion discrimination task in which the subject makes decisions based on noisy stimuli. A descriptive model known as the drift diffusion model has previously been used to explain psychometric and reaction time data from such tasks but to fully explain the data, one is forced to make ad-hoc assumptions such as a time-dependent collapsing decision boundary. We show that such assumptions are unnecessary when decision making is viewed within the framework of partially observable Markov decision processes (POMDPs). We propose an alternative model for decision making based on POMDPs. We show that the motion discrimination task reduces to the problems of (1) computing beliefs (posterior distributions) over the unknown direction and motion strength from noisy observations in a bayesian manner, and (2) selecting actions based on these beliefs to maximize the expected sum of future rewards. The resulting optimal policy (belief-to-action mapping) is shown to be equivalent to a collapsing decision threshold that governs the switch from evidence accumulation to a discrimination decision. We show that the model accounts for both accuracy and reaction time as a function of stimulus strength as well as different speed-accuracy conditions in the random dots task.
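The belief-computation step described above can be sketched for a simplified case. This is not the paper's full POMDP over direction and motion strength: it is a two-hypothesis Bayesian update with invented coherence and noise parameters, plus a confidence threshold standing in for the policy's decision boundary.

```python
import math, random

def belief_update(prior_right, obs, coherence=0.2, sigma=1.0):
    """One Bayes update of P(direction = rightward) from a noisy motion
    sample, assumed ~ N(+coherence, sigma) if right, N(-coherence, sigma)
    if left (hypothetical observation model)."""
    def lik(mu):
        return math.exp(-(obs - mu) ** 2 / (2 * sigma ** 2))
    p_right = lik(coherence) * prior_right
    p_left = lik(-coherence) * (1 - prior_right)
    return p_right / (p_right + p_left)

def decide(observations, threshold=0.95):
    """Accumulate evidence until the belief is confident enough, then
    commit; the switch from accumulation to commitment mimics the
    belief-to-action mapping described in the abstract."""
    b = 0.5
    for t, obs in enumerate(observations, start=1):
        b = belief_update(b, obs)
        if b > threshold or b < 1 - threshold:
            return ('right' if b > threshold else 'left'), t
    return ('right' if b >= 0.5 else 'left'), len(observations)

random.seed(0)
noisy = [random.gauss(0.2, 1.0) for _ in range(500)]  # true direction: right
choice, reaction_steps = decide(noisy)
```

The returned step count plays the role of a reaction time: weaker coherence yields slower threshold crossings, qualitatively matching the speed-accuracy pattern the model explains.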
Taylor, Maureen; Kent, Michael L.
1999-01-01
Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…
2010-10-15
...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in November 2010... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2011-01-14
...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in February 2011... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2012-05-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2012-02-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...
2012-07-13
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...
2011-07-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...
2013-11-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2011-02-15
...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in March 2011... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...
2012-08-15
... to prescribe interest assumptions under the regulation for valuation dates in September 2012. The... interest assumptions are intended to reflect current conditions in the financial and annuity markets... Assets in Single-Employer Plans (29 CFR part 4044) prescribes interest assumptions for valuing benefits...
Wrong assumptions in the financial crisis
Aalbers, M.B.
2009-01-01
Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to
Discourses and Theoretical Assumptions in IT Project Portfolio Management
DEFF Research Database (Denmark)
Hansen, Lars Kristian; Kræmmergaard, Pernille
2014-01-01
DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE. Interest in IT project portfolio management (IT PPM) has been growing in recent years. Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant articles across various research disciplines. We find and classify a stock of 107 relevant articles into four scientific discourses: the normative, the interpretive, the critical, and the dialogical discourses, as formulated by Deetz (1996). We find that the normative discourse dominates the IT PPM literature. Drawing on these discourses, we propose four metaphors: (1) IT PPM as the top management marketplace, (2) IT PPM as the cause of social dilemmas at the lower organizational levels, (3) IT PPM as polity between different organizational interests, (4) IT PPM as power relations that suppress creativity and diversity. Our metaphors can be used by practitioners to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations' IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases
Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded
2017-07-01
Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
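One of the mechanisms named above, reliance on small samples, can be sketched as follows. This is an illustrative toy in the spirit of BEAST, not the published model: the blending weight, sample size and prospect values are invented.

```python
import random

def risky_choice_rate(gamble, safe, kappa=5, trials=1000, seed=1):
    """Reliance-on-small-samples sketch: each simulated agent blends the
    prospect's expected value with the mean of a small mental sample of
    size kappa, and chooses the risky option if the blend beats the safe
    payoff. Returns the fraction of risky choices."""
    outcomes, probs = zip(*gamble)
    ev = sum(o * p for o, p in zip(outcomes, probs))
    rng = random.Random(seed)
    risky = 0
    for _ in range(trials):
        sample = rng.choices(outcomes, weights=probs, k=kappa)
        estimate = 0.5 * ev + 0.5 * sum(sample) / kappa
        risky += estimate > safe
    return risky / trials

# rare-event gamble: 10 with p = 0.1, else 0 (EV = 1.0), against a safe 0.9
rate = risky_choice_rate([(10.0, 0.1), (0.0, 0.9)], safe=0.9)
```

Because small samples usually miss the rare high outcome, the risky-choice rate falls well below what expected-value maximization alone would predict, mirroring the underweighting of rare events in decisions from experience without any subjective weighting function.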
Relaxing the zero-sum assumption in neutral biodiversity theory
Haegeman, Bart; Etienne, Rampal S.
2008-01-01
The zero-sum assumption is one of the ingredients of the standard neutral model of biodiversity by Hubbell. It states that the community is saturated all the time, which in this model means that the total number of individuals in the community is constant over time, and therefore introduces a
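The zero-sum dynamics described above can be sketched with a minimal Moran-type update; the community size and step count here are invented for illustration.

```python
import random

def moran_step(community, rng):
    """One zero-sum update of the neutral model: a randomly chosen
    individual dies and is immediately replaced by the offspring of a
    randomly chosen individual, so community size J never changes."""
    dead = rng.randrange(len(community))
    parent = rng.randrange(len(community))
    community[dead] = community[parent]

rng = random.Random(42)
community = list(range(20))      # J = 20 individuals, initially 20 species
for _ in range(5000):
    moran_step(community, rng)
richness = len(set(community))   # neutral drift erodes species richness
```

The invariant total abundance is exactly the saturation assumption the paper proposes to relax.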
Neural mechanisms and models underlying joint action.
Chersi, Fabian
2011-06-01
Humans, in particular, and to a lesser extent also other species of animals, possess the impressive capability of smoothly coordinating their actions with those of others. The great amount of work done in recent years in neuroscience has provided new insights into the processes involved in joint action, intention understanding, and task sharing. In particular, the discovery of mirror neurons, which fire both when animals execute actions and when they observe the same actions done by other individuals, has shed light on the intimate relationship between perception and action, elucidating the direct contribution of motor knowledge to action understanding. To date, however, a detailed description of the neural processes involved in these phenomena is still mostly lacking. Building upon data from single neuron recordings in monkeys observing the actions of a demonstrator and then executing the same or a complementary action, this paper describes the functioning of a biologically constrained neural network model of the motor and mirror systems during joint action. In this model, motor sequences are encoded as independent neuronal chains that represent concatenations of elementary motor acts leading to a specific goal. Action execution and recognition are achieved through the propagation of activity within specific chains. Due to the dual property of mirror neurons, the same architecture is capable of smoothly integrating and switching between observed and self-generated action sequences, thus allowing the system to evaluate multiple hypotheses simultaneously, understand actions done by others, and respond in an appropriate way.
Modeling heat stress under different environmental conditions.
Carabaño, M J; Logar, B; Bormann, J; Minet, J; Vanrobays, M-L; Díaz, C; Tychon, B; Gengler, N; Hammami, H
2016-05-01
Renewed interest in heat stress effects on livestock productivity derives from climate change, which is expected to increase temperatures and the frequency of extreme weather events. This study aimed at evaluating the effect of temperature and humidity on milk production in highly selected dairy cattle populations across 3 European regions differing in climate and production systems, to detect differences and similarities that can be used to optimize heat stress (HS) effect modeling. Milk, fat, and protein test day data from official milk recording for 1999 to 2010 in 4 Holstein populations located in the Walloon Region of Belgium (BEL), Luxembourg (LUX), Slovenia (SLO), and southern Spain (SPA) were merged with temperature and humidity data provided by the state meteorological agencies. After merging, the number of test day records/cows per trait ranged from 686,726/49,655 in SLO to 1,982,047/136,746 in BEL. Ranges of the daily average and maximum temperature-humidity indices (THIavg/THImax) were widest in SLO (22-74/28-84) and narrowest in SPA (39-76/46-83). Change point techniques were used to determine comfort thresholds, which differed across traits and climatic regions. Milk yield showed an inverted U-shaped pattern of response across the THI scale with a HS threshold around 73 THImax units. For fat and protein, thresholds were lower than for milk yield and were shifted around 6 THI units toward larger values in SPA compared with the other countries. Fat showed lower HS thresholds than protein traits in all countries. The traditional broken line model was compared with quadratic and cubic fits of the pattern of response in production to increasing heat loads. A cubic polynomial model allowing for individual variation in patterns of response, with THIavg as the heat load measure, showed the best statistical features. Higher/lower producing animals showed less/more persistent production (quantity and quality) across the THI scale.
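The broken-line threshold idea used above can be sketched with a naive grid search. The change point techniques in the study are more sophisticated; here the data are synthetic, with a plateau up to THI 73 followed by a linear decline, so the recovered threshold is known in advance.

```python
def broken_line_threshold(thi, response, candidates):
    """Fit, for each candidate threshold, a broken-line model (flat
    comfort zone, then linear decline above the threshold) by least
    squares, and return the threshold with the smallest error."""
    best_thr, best_sse = None, float('inf')
    for thr in candidates:
        excess = [max(0.0, x - thr) for x in thi]
        n = len(thi)
        sx, sy = sum(excess), sum(response)
        sxx = sum(e * e for e in excess)
        sxy = sum(e * y for e, y in zip(excess, response))
        denom = n * sxx - sx * sx
        if denom == 0:
            continue                       # no data above this threshold
        slope = (n * sxy - sx * sy) / denom
        intercept = (sy - slope * sx) / n
        sse = sum((y - (intercept + slope * e)) ** 2
                  for e, y in zip(excess, response))
        if sse < best_sse:
            best_thr, best_sse = thr, sse
    return best_thr

# synthetic test-day yields: flat up to THImax = 73, then declining
thi = list(range(40, 90))
milk = [30.0 if t <= 73 else 30.0 - 0.4 * (t - 73) for t in thi]
threshold = broken_line_threshold(thi, milk, candidates=range(60, 85))
```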
Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative
Ahmed, Abdelhamid
2008-01-01
The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…
Modeling interconnect corners under double patterning misalignment
Hyun, Daijoon; Shin, Youngsoo
2016-03-01
Publisher's Note: This paper, originally published on March 16th, was replaced with a corrected/revised version on March 28th. Interconnect corners should accurately reflect the effect of misalignment in the LELE double patterning process. Misalignment is usually considered separately from interconnect structure variations; this incurs too much pessimism and fails to reflect a large increase in total capacitance for asymmetric interconnect structures. We model interconnect corners by taking account of misalignment in conjunction with interconnect structure variations; we also characterize the misalignment effect more accurately by handling the metal pitch on both sides of a target metal independently, identifying the metal space on both sides of the target metal.
2013-04-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May... paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...
2013-05-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June... paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...
2011-05-13
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...
Ng, Edmond S-W; Klungel, Olaf H; Groenwold, Rolf H H; van Staa, Tjeerd-Pieter
2015-01-01
Observational drug safety studies may be susceptible to confounding or protopathic bias. This bias may cause a spurious relationship between drug exposure and adverse side effect when none exists and may lead to unwarranted safety alerts. The spurious relationship may manifest itself through substantially different risk levels between exposure groups at the start of follow-up when exposure is deemed too short to have any plausible biological effect of the drug. The restrictive proportional hazards assumption with its arbitrary choice of baseline hazard function renders the commonly used Cox proportional hazards model of limited use for revealing such potential bias. We demonstrate a fully parametric approach using accelerated failure time models with an illustrative safety study of glucose-lowering therapies and show that its results are comparable against other methods that allow time-varying exposure effects. Our approach includes a wide variety of models that are based on the flexible generalized gamma distribution and allows direct comparisons of estimated hazard functions following different exposure-specific distributions of survival times. This approach lends itself to two alternative metrics, namely relative times and difference in times to event, allowing physicians more ways to communicate patient's prognosis without invoking the concept of risks, which some may find hard to grasp. In our illustrative case study, substantial differences in cancer risks at drug initiation followed by a gradual reduction towards null were found. This evidence is compatible with the presence of protopathic bias, in which undiagnosed symptoms of cancer lead to switches in diabetes medication. Copyright © 2015 John Wiley & Sons, Ltd.
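The "relative times" metric mentioned above can be sketched for the Weibull special case of the generalized gamma family. All parameters here are invented; the point is only that in an accelerated failure time model a covariate rescales time itself, so the ratio of median survival times estimates exp(beta).

```python
import math, random

def weibull_aft_sample(x, b0, b1, shape, rng):
    """Draw one survival time from a Weibull accelerated failure time
    model: the covariate x rescales time, T = exp(b0 + b1*x) * W,
    where W is a standard Weibull variate of the given shape."""
    u = 1.0 - rng.random()                     # u in (0, 1]
    w = (-math.log(u)) ** (1.0 / shape)
    return math.exp(b0 + b1 * x) * w

def median(values):
    s = sorted(values)
    return 0.5 * (s[len(s) // 2 - 1] + s[len(s) // 2])

rng = random.Random(3)
exposed = [weibull_aft_sample(1, 2.0, -0.5, 1.5, rng) for _ in range(20000)]
control = [weibull_aft_sample(0, 2.0, -0.5, 1.5, rng) for _ in range(20000)]
relative_time = median(exposed) / median(control)   # ≈ exp(-0.5) ≈ 0.61
```

A relative time of 0.61 reads directly as "events occur 39% sooner in the exposed group", the risk-free phrasing the authors advocate for communicating prognosis.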
Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning
2016-01-01
A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576
Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.
Hsu, Anne; Griffiths, Thomas L
2016-01-01
A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
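The contrast between the two sampling assumptions can be sketched with a minimal Bayesian learner over toy "languages" (sets of sentences with a uniform prior); the hypotheses and data below are invented for illustration.

```python
def posterior(hypotheses, data, sampling='strong'):
    """Compare strong vs weak sampling. Under strong sampling each
    observation from hypothesis h has likelihood 1/|h| (the size
    principle); under weak sampling any consistent observation has
    likelihood 1, so only inconsistency matters."""
    scores = {}
    for name, language in hypotheses.items():
        if not all(d in language for d in data):
            scores[name] = 0.0
        elif sampling == 'strong':
            scores[name] = (1.0 / len(language)) ** len(data)
        else:
            scores[name] = 1.0
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

hyps = {'small': {'a', 'b'}, 'large': {'a', 'b', 'c', 'd'}}
data = ['a', 'b', 'a']
strong = posterior(hyps, data, 'strong')
weak = posterior(hyps, data, 'weak')
```

Under strong sampling, never observing 'c' or 'd' drives the posterior toward the smaller language (indirect negative evidence); under weak sampling, both consistent hypotheses remain equally plausible.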
Evaluating The Markov Assumption For Web Usage Mining
DEFF Research Database (Denmark)
Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.
2003-01-01
Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG) model (Borges and Levene, 1999). These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page depends only on the last n pages visited. This is not always valid, i.e., false browsing patterns may be discovered. However, to our knowledge there has been no systematic study of the validity of the Markov assumption w.r.t. web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal…
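How a too-shallow history depth produces false browsing patterns can be sketched directly; the sessions below are invented, and this is a bare n-gram estimator rather than an HPG implementation.

```python
from collections import Counter

def markov_probs(sessions, depth):
    """Estimate next-page probabilities from web-log sessions under a
    Markov assumption with history depth n: count each length-n history
    and each (history, next-page) continuation."""
    ctx, joint = Counter(), Counter()
    for s in sessions:
        for i in range(len(s) - depth):
            history = tuple(s[i:i + depth])
            ctx[history] += 1
            joint[history + (s[i + depth],)] += 1
    return {k: v / ctx[k[:-1]] for k, v in joint.items()}

# two genuine browsing patterns: A->B->C and D->B->E, never A->B->E
sessions = [['A', 'B', 'C']] * 50 + [['D', 'B', 'E']] * 50
p1 = markov_probs(sessions, 1)   # depth 1 forgets whether B came from A or D
p2 = markov_probs(sessions, 2)   # depth 2 keeps that context

# chaining depth-1 transitions assigns the false pattern A->B->E
# probability 0.5, while depth 2 predicts C after (A, B) with certainty
```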
Li, Longbiao
2016-06-01
An analytical method has been developed to investigate the effect of interface wear on the tensile strength of carbon fiber-reinforced ceramic-matrix composites (CMCs) under multiple fatigue loading. The Budiansky-Hutchinson-Evans shear-lag model was used to describe the micro stress field of the damaged composite, considering fiber failure and the difference between the new and original interface debonded regions. The statistical matrix multicracking model and a fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. An interface shear stress degradation model and a fiber strength degradation model were adopted to analyze the effect of interface wear on the tensile strength of the composite subjected to multiple fatigue loading. Under tensile loading, the fiber failure probabilities were determined by combining the interface wear model and the fiber failure model, based on the assumptions that the fiber strength follows a two-parameter Weibull distribution and that the loads carried by broken and intact fibers satisfy the Global Load Sharing criterion. The composite can no longer support the applied load when the total load supported by broken and intact fibers reaches its maximum value. The conditions of a single matrix crack and matrix multicracking for tensile strength, corresponding to multiple fatigue peak stress levels and different cycle numbers, have been analyzed.
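The strength criterion stated above, failure when the load carried by the fibers reaches its maximum, can be sketched in a strongly simplified form. This is not the paper's model: broken fibers are assumed here to carry no load at all (more refined Global Load Sharing treatments assign them partial load), and the Weibull parameters are invented.

```python
import math

def supported_stress(t, vf, sigma_c, m):
    """Simplified Global Load Sharing: under two-parameter Weibull fiber
    statistics a fraction exp(-(T/sigma_c)**(m+1)) of fibers survives at
    intact-fiber stress T; survivors carry T, broken fibers carry none,
    so the composite supports vf * T * (surviving fraction)."""
    intact = math.exp(-(t / sigma_c) ** (m + 1))
    return vf * t * intact

def tensile_strength(vf, sigma_c, m, steps=20000):
    """Ultimate strength = maximum of the supported stress over T:
    beyond it, new fiber breaks outpace the load survivors pick up."""
    t_max = 3.0 * sigma_c
    return max(supported_stress(i * t_max / steps, vf, sigma_c, m)
               for i in range(steps + 1))

uts = tensile_strength(vf=0.4, sigma_c=2000.0, m=4.0)   # MPa, illustrative
```

Interface wear would enter this picture by degrading sigma_c with cycle number, shifting the maximum, and hence the predicted tensile strength, downward.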
Distributed automata in an assumption-commitment framework
Indian Academy of Sciences (India)
We model examples like reliable bit transmission and sequence transmission protocols in this framework and discuss how assumption-commitment structure facilitates compositional design of such protocols. We prove a decomposition theorem which states that every protocol specified globally as a finite state system can ...
Modeling the release of Escherichia coli from soil into overland flow under raindrop impact
Wang, C.; Parlange, J.-Y.; Rasmussen, E. W.; Wang, X.; Chen, M.; Dahlke, H. E.; Walter, M. T.
2017-08-01
Pathogen transport through the environment is complicated, involving a variety of physical, chemical, and biological processes. This study considered the transfer of microorganisms from soil into overland flow under rain-splash conditions. Although microorganisms are colloidal particles, they are commonly quantified as colony-forming units (CFUs) per volume rather than as a mass or number of particles per volume, which poses a modeling challenge. However, for very small particles that essentially remain suspended after being ejected into ponded water and for which diffusion can be neglected, the Gao model, originally derived for solute transfer from soil, describes particle transfer into suspension and is identical to the Hairsine-Rose particle erosion model for this special application. Small-scale rainfall experiments were conducted in which an Escherichia coli (E. coli) suspension was mixed with a simple soil (9:1 sand-to-clay mass ratio). The model fit the experimental E. coli data. Although re-conceptualizing the Gao solute model as a particle suspension model was convenient for accommodating the unfortunate units of CFU ml-1, the Hairsine-Rose model is insensitive to assumptions about E. coli per CFU as long as the assumed initial mass concentration of E. coli is very small compared to that of the soil particle classes. Although they undoubtedly actively interact with their environment, this study shows that transport of microorganisms from soil into overland storm flows can be reasonably modeled using the same principles that have been applied to small mineral particles in previous studies.
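The transfer dynamics described above can be sketched with a minimal two-compartment Euler scheme in the spirit of the Gao and Hairsine-Rose models: raindrops eject cells from a soil exchange layer into ponded water that is flushed by runoff. All parameter values and the simple first-order ejection term are illustrative, not fitted to the paper's experiments.

```python
def ecoli_runoff(m0, rain, detach, depth, runoff, t_end, dt=0.01):
    """Euler sketch of raindrop-driven transfer: the exchange-layer
    store m depletes at rate detach * rain * m, the ejected cells enter
    ponded water of the given depth, and runoff flushes the water.
    Returns the time series of suspended concentration."""
    m, c, series = m0, 0.0, []
    for _ in range(int(t_end / dt)):
        eject = detach * rain * m        # ejection flux from the layer
        m -= eject * dt                  # exchange layer depletes
        c += (eject - runoff * c) * dt / depth
        series.append(c)
    return series

conc = ecoli_runoff(m0=1e6, rain=0.5, detach=0.02,
                    depth=0.01, runoff=0.001, t_end=60.0)
```

The concentration rises while the exchange layer is still rich, peaks, and then declines as the store depletes, the characteristic breakthrough shape such transfer models reproduce.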
An optimal multiple switching problem under weak assumptions
Directory of Open Access Journals (Sweden)
Imen Hassairi
2017-10-01
Full Text Available This work studies the problem of optimal multiple switching in finite horizon, when the switching cost functions are continuous and belong to class D. The problem is solved by means of the Snell envelope of processes.
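The Snell envelope construction used above can be sketched in its simplest discrete setting. The full multiple-switching problem involves a system of interacting Snell envelopes; shown here is only the single-stopping building block, with an invented American-put example on a binomial tree.

```python
def snell_envelope(payoff, p=0.5):
    """Snell envelope on a recombining binomial tree, by backward
    induction: U_N = Z_N and U_n = max(Z_n, E[U_{n+1} | F_n]), where
    payoff[n][i] is the reward Z for stopping at time n in state i
    (state i+1 is reached on an up move, state i on a down move)."""
    u = list(payoff[-1])
    for n in range(len(payoff) - 2, -1, -1):
        u = [max(payoff[n][i], p * u[i + 1] + (1 - p) * u[i])
             for i in range(len(payoff[n]))]
    return u[0]          # value of the optimal stopping problem at time 0

# illustrative American put, strike 100, on a 3-step binomial stock
steps, s0, up, dn = 3, 100.0, 1.2, 0.8
payoff = [[max(0.0, 100.0 - s0 * up ** i * dn ** (n - i))
           for i in range(n + 1)]
          for n in range(steps + 1)]
value = snell_envelope(payoff)
```

The envelope dominates the payoff at every node, and its first contact with the payoff gives the optimal stopping (or, in the multiple-switching extension, switching) time.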
Pricing Participating Products under a Generalized Jump-Diffusion Model
Directory of Open Access Journals (Sweden)
Tak Kuen Siu
2008-01-01
Full Text Available We propose a model for valuing participating life insurance products under a generalized jump-diffusion model with a Markov-switching compensator. It also nests a number of important and popular models in finance, including the classes of jump-diffusion models and Markovian regime-switching models. The Esscher transform is employed to determine an equivalent martingale measure. Simulation experiments are conducted to illustrate the practical implementation of the model and to highlight some features that can be obtained from our model.
DEFF Research Database (Denmark)
Bastardie, Francois; Nielsen, J. Rasmus; Miethe, Tanja
We previously developed a spatially explicit, individual-based model (IBM) evaluating the bio-economic efficiency of fishing vessel movements between regions according to the catching and targeting of different species based on the most recent high resolution spatial fishery data. The main purpose was to test the effects of alternative fishing effort allocation scenarios related to fuel consumption, energy efficiency (value per litre of fuel), sustainable fish stock harvesting, and profitability of the fisheries. The assumption here was constant underlying resource availability. Now, an advanced ... or to the alteration of individual fishing patterns. We demonstrate that integrating the spatial activity of vessels and local fish stock abundance dynamics allow for interactions and more realistic predictions of fishermen behaviour, revenues and stock abundance
2012-01-13
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... terminating single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974... the financial and annuity markets. Assumptions under the benefit payments regulation are updated...
2011-11-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The... financial and annuity markets. Assumptions under the benefit payments regulation are updated monthly. This...
MODELLING OF THE BEHAVIOUR OF RHEOLOGICAL BODIES UNDER DIFFERENT LOADING LAWS
Directory of Open Access Journals (Sweden)
V. V. Bendyukov
2014-01-01
Full Text Available The offered model describes the behaviour of rheological bodies (the viscoelasticity of materials, structures or systems) under the controlling influence of a load acting according to a given law for some time.
Mathematical modelling of water radiolysis kinetics under reactor conditions
International Nuclear Information System (INIS)
Khodulev, L.B.; Shapova, E.A.
1989-01-01
Experimental data on coolant radiolysis (RBMK-1000 reactor) were used to construct a mathematical model of water radiolysis kinetics under reactor conditions. Good agreement of the calculation results with experiment is noted.
Modeling of the bipolar transistor under different pulse ionizing radiations
Antonova, A. M.; Skorobogatov, P. K.
2017-01-01
This paper describes a 2D model of the bipolar transistor 2T312 under gamma, X-ray and laser pulse ionizing radiation. The Finite Element Discretization and the Semiconductor module of COMSOL 5.1 are used. The paper analyses the energy deposition in this device under different radiations and presents the transient ionizing current response for several different conditions.
Propagation of Computer Virus under Human Intervention: A Dynamical Model
Chenquan Gan; Xiaofan Yang; Wanping Liu; Qingyi Zhu; Xulong Zhang
2012-01-01
This paper examines the propagation behavior of computer virus under human intervention. A dynamical model describing the spread of computer virus, under which a susceptible computer can become recovered directly and an infected computer can become susceptible directly, is proposed. Through a qualitative analysis of this model, it is found that the virus-free equilibrium is globally asymptotically stable when the basic reproduction number R0≤1, whereas the viral equilibrium is globally asympt...
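The threshold behaviour around the basic reproduction number can be illustrated with a minimal compartmental sketch. The specific SIRS-style rate equations and parameter values below are assumptions for illustration, not the paper's exact system; they do mirror the abstract's transition structure (a susceptible computer can become recovered directly, an infected one can become susceptible directly):

```python
# Euler integration of a normalized SIRS-style computer-virus model.
# beta: infection rate, gamma: infected -> susceptible (direct cure),
# alpha: susceptible -> recovered (e.g. patching), delta: loss of protection.
beta, gamma, alpha, delta = 0.3, 0.4, 0.1, 0.1
S, I, R = 0.6, 0.3, 0.1
dt = 0.01
for _ in range(20000):                    # integrate to t = 200
    dS = -beta * S * I - alpha * S + gamma * I + delta * R
    dI = beta * S * I - gamma * I
    dR = alpha * S - delta * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
# The virus-free equilibrium has S* = delta / (alpha + delta) = 0.5, giving
# R0 = beta * S* / gamma = 0.375 <= 1, so infections die out globally.
```

With these rates the infected fraction decays to zero and the system settles at the virus-free equilibrium, matching the R0 ≤ 1 case of the stability result.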
Stochastic Online Learning in Dynamic Networks under Unknown Models
2016-08-02
This research aims to develop fundamental theories and practical algorithms for stochastic online learning in dynamic networks under unknown models. Keywords: online learning, multi-armed bandit, dynamic networks.
DEFF Research Database (Denmark)
Jeong, Cheol-Ho
2011-01-01
Room surfaces have been extensively modeled as locally reacting in room acoustic predictions although such modeling could yield significant errors under certain conditions. Therefore, this study aims to propose a guideline for adopting the local reaction assumption by comparing predicted random incidence acoustical characteristics of typical building elements made of porous materials assuming extended and local reaction. For each surface reaction, five well-established wave propagation models, the Delany-Bazley, Miki, Beranek, Allard-Champoux, and Biot model, are employed. Effects of the flow resistivity and the absorber thickness on the difference between the two surface reaction models are examined and discussed. For a porous absorber backed by a rigid surface, the assumption of local reaction always underestimates the random incidence absorption coefficient and the local reaction models give...
A model for the simulation of the ozone budget in the atmosphere under anthropogenic perturbation
International Nuclear Information System (INIS)
Bestman, A.R.
1989-04-01
A simple model is proposed for the laboratory simulation of the anthropogenic perturbation of ozone (O3) in the middle atmosphere. It consists of two vertical plates maintained at very high but nearly equal temperatures, between which is a binary mixture of a chemically reacting fluid. The plates are rotated about a horizontal axis. Adopting the optically thin non-grey gas approximation for radiative heat transfer, the differential equations governing the velocity components, temperature and mass concentration are integrated in closed form. The solutions show good agreement with the exact integral formalism for the radiative flux under the same assumption of nearly equal wall temperatures. (author). 7 refs, 1 fig
A Note on the Fundamental Theorem of Asset Pricing under Model Uncertainty
Directory of Open Access Journals (Sweden)
Erhan Bayraktar
2014-10-01
Full Text Available We show that the recent results on the Fundamental Theorem of Asset Pricing and the super-hedging theorem in the context of model uncertainty can be extended to the case in which the options available for static hedging (hedging options are quoted with bid-ask spreads. In this set-up, we need to work with the notion of robust no-arbitrage which turns out to be equivalent to no-arbitrage under the additional assumption that hedging options with non-zero spread are non-redundant. A key result is the closedness of the set of attainable claims, which requires a new proof in our setting.
A Mathematical Model of Prostate Tumor Growth Under Hormone Therapy with Mutation Inhibitor
Tao, Youshan; Guo, Qian; Aihara, Kazuyuki
2010-04-01
This paper extends Jackson’s model describing the growth of a prostate tumor with hormone therapy to a new one with hypothetical mutation inhibitors. The new model not only considers the mutation by which androgen-dependent (AD) tumor cells mutate into androgen-independent (AI) ones but also introduces inhibition which is assumed to change the mutation rate. The tumor consists of two types of cells (AD and AI) whose proliferation and apoptosis rates are functions of androgen concentration. The mathematical model represents a free-boundary problem for a nonlinear system of parabolic equations, which describe the evolution of the populations of the above two types of tumor cells. The tumor surface is a free boundary, whose velocity is equal to the cell’s velocity there. Global existence and uniqueness of solutions of this model are proved. Furthermore, explicit formulae for the tumor volume at any time t are found in an androgen-deprived environment under the assumption of radial symmetry, and therefore the dynamics of tumor growth under androgen-deprivation therapy can be predicted by these formulae. Qualitative analysis and numerical simulation show that controlling the mutation may improve the effect of hormone therapy or delay a tumor relapse.
Data-driven Modelling for decision making under uncertainty
Angria S, Layla; Dwi Sari, Yunita; Zarlis, Muhammad; Tulus
2018-01-01
The rise of issues around uncertainty in decision making has become a lively topic of conversation in operations research. Many models have been presented, one of which is data-driven modelling (DDM). The purpose of this paper is to extract and recognize patterns in data, and to find the best model for a decision-making problem under uncertainty by using a data-driven modelling approach with linear programming, linear and nonlinear differential equations, and a Bayesian approach. Model criteria are tested to determine the smallest error, and the model with the smallest error is selected as the best model to use.
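The "smallest error wins" selection rule described above can be sketched very simply. The candidate models and data here are illustrative assumptions; the paper's candidates (linear programs, differential equations, Bayesian models) would slot into the same comparison loop:

```python
# Data-driven model selection: fit each candidate, keep the smallest error.
def fit_constant(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m                       # mean-only baseline model

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x               # ordinary least squares line

def rmse(model, xs, ys):
    return (sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1.0, 3.1, 4.9, 7.2, 9.0, 11.1, 12.8, 15.2]   # roughly y = 1 + 2x
candidates = {"constant": fit_constant(xs, ys), "linear": fit_linear(xs, ys)}
errors = {name: rmse(m, xs, ys) for name, m in candidates.items()}
best = min(errors, key=errors.get)                  # smallest error wins
```

In practice the error should be computed on held-out data rather than the fitting data, otherwise the most flexible candidate always wins.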
Ernst, Anja F; Albers, Casper J
2017-01-01
Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of and increased transparency in the reporting of statistical assumption checking.
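The misconception flagged above (normality of the variables versus normality of the errors) is easy to demonstrate with simulated data; the data-generating process below is an illustrative assumption:

```python
import random

def skew(v):
    """Sample skewness: third central moment over variance^(3/2)."""
    n = len(v)
    m = sum(v) / n
    s2 = sum((x - m) ** 2 for x in v) / n
    return (sum((x - m) ** 3 for x in v) / n) / s2 ** 1.5

random.seed(0)
x = [random.expovariate(1.0) for _ in range(5000)]    # heavily skewed predictor
y = [2.0 * xi + random.gauss(0.0, 1.0) for xi in x]   # but the errors ARE normal

# Ordinary least squares fit of y on x
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
# The predictor is far from normal, yet the residuals are roughly symmetric:
# a normality check on the variables themselves would wrongly condemn a model
# whose error assumption is perfectly satisfied.
```

This is exactly the regression model the standard assumptions permit: only the errors, not the predictors or the outcome marginally, need to be normal.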
Culturally Biased Assumptions in Counseling Psychology
Pedersen, Paul B.
2003-01-01
Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…
Assumptions of Multiple Regression: Correcting Two Misconceptions
Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason
2013-01-01
In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…
Categorical Judgment Scaling with Ordinal Assumptions.
Hofacker, C F
1984-01-01
One of the most common activities of psychologists and other researchers is to construct Likert scales and then proceed to analyze them as if the numbers constituted an equal interval scale. There are several alternatives to this procedure (Thurstone & Chave, 1929; Muthen, 1983) that make normality assumptions but which do not assume that the answer categories as used by subjects constitute an equal interval scale. In this paper a new alternative is proposed that uses additive conjoint measurement. It is assumed that subjects can report their attitudes towards stimuli in the appropriate rank order. Neither within-subject nor between-subject distributional assumptions are made. Nevertheless, interval level stimulus values, as well as response category boundaries, are extracted by the procedure. This approach is applied to three sets of attitude data. In these three cases, the equal interval assumption is clearly wrong. Despite this, arithmetic means seem to closely reflect group attitudes towards the stimuli. In one data set, the normality assumption of Thurstone and Chave (1929) and Muthen (1983) is supported, and in the two others it is supported with reservations.
Critically Challenging Some Assumptions in HRD
O'Donnell, David; McGuire, David; Cross, Christine
2006-01-01
This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…
Causal Mediation Analysis: Warning! Assumptions Ahead
Keele, Luke
2015-01-01
In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
A simplified model of choice behavior under uncertainty
Directory of Open Access Journals (Sweden)
Ching-Hung Lin
2016-08-01
Full Text Available The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated the prospect utility (PU) models (Ahn et al., 2008) to be more effective than the EU models in the IGT. Nevertheless, after some preliminary tests, we propose that the Ahn et al. (2008) PU model is not optimal due to some incompatible results between our behavioral and modeling data. This study aims to modify the Ahn et al. (2008) PU model into a simplified model, and collected 145 subjects’ IGT performance as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the power of influence of the parameters α, λ, and A has a hierarchical order in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay-loss-shift rather than foreseeing the long-term outcome. However, there are still other behavioral variables that are not well revealed under these dynamic uncertainty situations. Therefore, the optimal behavioral models may not have been found. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated.
Skinfold creep under load of caliper. Linear visco- and poroelastic model simulations.
Nowak, Joanna; Nowak, Bartosz; Kaczmarek, Mariusz
2015-01-01
This paper addresses the diagnostic idea proposed in [11] to measure the parameter called rate of creep of the axillary fold of tissue using a modified Harpenden skinfold caliper in order to distinguish normal and oedematous tissue. Our simulations are intended to help in understanding the creep phenomenon and the creep rate parameter as a sensitive indicator of oedema existence. The parametric analysis shows the tissue behavior under the external load as well as its sensitivity to changes of crucial hydro-mechanical tissue parameters, e.g., permeability or stiffness. The linear viscoelastic and poroelastic models of normal (single-phase) and oedematous tissue (two-phase: swelled tissue with an excess of interstitial fluid) implemented in the COMSOL Multiphysics environment are used. Simulations are performed within the range of small strains for a simplified fold geometry, material characterization and boundary conditions. The predicted creep is the result of viscosity (viscoelastic model) or pore fluid displacement (poroelastic model) in tissue. The tissue deformations, interstitial fluid pressure as well as interstitial fluid velocity are discussed in a parametric analysis with respect to the elasticity modulus, relaxation time or permeability of tissue. The creep rate determined within the models of tissue is compared and referred to the diagnostic idea in [11]. The results obtained from the two linear models of subcutaneous tissue indicate that the form of the creep curve and the creep rate are sensitive to the material parameters which characterize the tissue. However, the adopted modelling assumptions point to a limited applicability of the creep rate as the discriminant of oedema.
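The creep behaviour the paper simulates numerically has a simple closed form in the textbook linear viscoelastic case. The Kelvin-Voigt solid and the parameter values below are illustrative assumptions, far simpler than the paper's COMSOL models:

```python
import math

def kv_creep(t, sigma0, E, eta):
    """Creep strain of a Kelvin-Voigt solid under a constant stress step sigma0:
    eps(t) = (sigma0/E) * (1 - exp(-t/tau)), with retardation time tau = eta/E."""
    tau = eta / E
    return (sigma0 / E) * (1.0 - math.exp(-t / tau))

sigma0, E, eta = 1.0e3, 1.0e5, 1.0e4      # Pa, Pa, Pa.s (illustrative values)
ts = [0.01 * i for i in range(101)]       # 0 .. 1 s
eps = [kv_creep(t, sigma0, E, eta) for t in ts]
creep_rate = (eps[1] - eps[0]) / 0.01     # early-time creep rate, ~ sigma0/eta
# A stiffer tissue (larger E) lowers the creep asymptote sigma0/E, while a
# larger eta (analogous to slower pore-fluid drainage in the poroelastic case)
# lowers the early creep rate, which is why the creep rate can in principle
# discriminate oedematous from normal tissue.
```

The poroelastic model replaces the viscous dashpot with pressure-driven interstitial fluid flow, but produces a qualitatively similar creep curve, which is consistent with the paper's comparison of the two models.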
Modeling Root Depth Development with time under some Crop and ...
African Journals Online (AJOL)
Five empirical models for the prediction of root depth developed with time under four combinations of crop and tillage management systems have been developed by the method of polynomial regression. Root depth predictions by a general model were severally correlated with root depth predictions by the ...
Empirical Analysis of Farm Credit Risk under the Structure Model
Yan, Yan
2009-01-01
The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…
Schuler, Eric R; Boals, Adriel
2016-05-01
Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Cascading failures in interdependent systems under a flow redistribution model
Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman
2018-02-01
Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_A,i, C_A,i} and {L_B,i, C_B,i}, i = 1, ..., n, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1 - a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a fraction p1 of lines in A and a fraction p2 in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to improved robustness for each individual network.
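The redistribution rule in the abstract can be turned into a small simulation. This is a minimal sketch under stated assumptions (equal redistribution, and a fallback that diverts load to the other system when one side has no alive lines left), not the paper's analytical framework:

```python
def cascade(LA, CA, LB, CB, attackA, attackB, a, b):
    """Flow-redistribution cascade between coupled systems A and B.
    A failed line in A sheds fraction a of its load equally over the alive
    lines of B and (1 - a) equally over the alive lines of A; b plays the
    symmetric role for failures in B. If one side has no alive lines, its
    share is diverted to the other side (and dropped if both are empty)."""
    LA, LB = list(LA), list(LB)
    aliveA, aliveB = [True] * len(LA), [True] * len(LB)
    failA, failB = set(attackA), set(attackB)
    while failA or failB:
        shedA = sum(LA[i] for i in failA)
        shedB = sum(LB[i] for i in failB)
        for i in failA:
            aliveA[i], LA[i] = False, 0.0
        for i in failB:
            aliveB[i], LB[i] = False, 0.0
        idxA = [i for i, ok in enumerate(aliveA) if ok]
        idxB = [i for i, ok in enumerate(aliveB) if ok]
        toA = (1 - a) * shedA + b * shedB
        toB = a * shedA + (1 - b) * shedB
        if not idxA:
            toB, toA = toB + toA, 0.0
        if not idxB:
            toA, toB = toA + toB, 0.0
        for i in idxA:
            LA[i] += toA / len(idxA)
        for i in idxB:
            LB[i] += toB / len(idxB)
        failA = {i for i in idxA if LA[i] > CA[i]}   # newly overloaded lines
        failB = {i for i in idxB if LB[i] > CB[i]}
    return aliveA, aliveB

# With little capacity headroom, a single attacked line in A takes down both
# systems through repeated redistribution:
deadA, deadB = cascade([1, 1, 1], [1.2] * 3, [1, 1, 1], [1.2] * 3,
                       {0}, set(), 0.5, 0.5)
```

Varying `a` and `b` in this sketch reproduces the qualitative point (iii) of the abstract: moderate coupling can spread a shock thinly enough to stop it, while too little or too much coupling lets one side overload.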
Modeling Escherichia coli removal in constructed wetlands under pulse loading.
Hamaamin, Yaseen A; Adhikari, Umesh; Nejadhashemi, A Pouyan; Harrigan, Timothy; Reinhold, Dawn M
2014-03-01
Manure-borne pathogens are a threat to water quality and have resulted in disease outbreaks globally. Land application of livestock manure to croplands may result in pathogen transport through surface runoff and tile drains, eventually entering water bodies such as rivers and wetlands. The goal of this study was to develop a robust model for estimating the pathogen removal in surface flow wetlands under pulse loading conditions. A new modeling approach was used to describe Escherichia coli removal in pulse-loaded constructed wetlands using adaptive neuro-fuzzy inference systems (ANFIS). Several ANFIS models were developed and validated using experimental data under pulse loading over two seasons (winter and summer). In addition to ANFIS, a mechanistic fecal coliform removal model was validated using the same sets of experimental data. The results showed that the ANFIS model significantly improved the ability to describe the dynamics of E. coli removal under pulse loading. The mechanistic model performed poorly as demonstrated by lower coefficient of determination and higher root mean squared error compared to the ANFIS models. The E. coli concentrations corresponding to the inflection points on the tracer study were keys to improving the predictability of the E. coli removal model. Copyright © 2013 Elsevier Ltd. All rights reserved.
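The abstract ranks the ANFIS and mechanistic models by coefficient of determination and root mean squared error; both criteria are straightforward to compute. The observation and prediction vectors below are hypothetical numbers for illustration only:

```python
def rmse(obs, pred):
    """Root mean squared error between observations and predictions."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs        = [5.0, 4.2, 3.1, 2.4, 1.8]   # hypothetical log10 E. coli levels
anfis_like = [4.9, 4.1, 3.2, 2.3, 1.9]   # a closer fit (stands in for ANFIS)
mech_like  = [4.0, 4.0, 3.0, 3.0, 2.5]   # a poorer fit (stands in for the
                                         # mechanistic model of the abstract)
# The better model shows the higher R^2 and the lower RMSE, the two criteria
# on which the abstract's comparison rests.
```

A model can of course score well on both metrics and still miss dynamics that matter, which is why the abstract also points to the inflection points of the tracer study as a key to predictability.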
The 'revealed preferences' theory: Assumptions and conjectures
International Nuclear Information System (INIS)
Green, C.H.
1983-01-01
Being a kind of intuitive psychology, 'revealed preferences'-based approaches to determining acceptable risks are a useful method for the generation of hypotheses. In view of the fact that reliability engineering develops faster than methods for determining reliability targets, the revealed-preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'revealed preferences' theory is based are identified and analysed, and afterwards compared with experimentally obtained results. (orig./DG) [de
How to Handle Assumptions in Synthesis
Directory of Open Access Journals (Sweden)
Roderick Bloem
2014-07-01
Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.
Towards New Probabilistic Assumptions in Business Intelligence
Schumann Andrew; Szelc Andrzej
2015-01-01
One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...
About tests of the “simplifying” assumption for conditional copulas
Directory of Open Access Journals (Sweden)
Derumigny Alexis
2017-08-01
Full Text Available We discuss the so-called “simplifying assumption” of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of point-wise events are proposed. The limiting distributions of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap scheme. Some simulations illustrate the relevance of our results.
One-dimensional models of thermal activation under shear stress
CSIR Research Space (South Africa)
Nabarro, FRN
2003-01-01
Full Text Available The one-dimensional models presented here may illuminate the study of more realistic models. For the model in which as many dislocations are poised for backward jumps as for forward jumps, the experimental activation volume V_e(τ_a) under applied stresses close to τ_a is different from the true activation volume V(τ) evaluated at τ = τ_a. The relations between the two are developed. A model is then discussed in which fewer dislocations are available for backward than for forward jumps. Finally...
The sufficiency assumption of the reasoned approach to action
Directory of Open Access Journals (Sweden)
David Trafimow
2015-12-01
Full Text Available The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for), to see whether they are important, in general or with respect to the specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.
Data-driven smooth tests of the proportional hazards assumption
Czech Academy of Sciences Publication Activity Database
Kraus, David
2007-01-01
Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007
An Agent-Based Model of School Closing in Under-Vaccinated Communities During Measles Outbreaks.
Getz, Wayne M; Carlson, Colin; Dougherty, Eric; Porco Francis, Travis C; Salter, Richard
2016-04-01
The winter 2014-15 measles outbreak in the US represents a significant crisis in the emergence of a functionally extirpated pathogen. Conclusively linking this outbreak to decreases in the measles/mumps/rubella (MMR) vaccination rate (driven by anti-vaccine sentiment) is critical to motivating MMR vaccination. We used the NOVA modeling platform to build a stochastic, spatially-structured, individual-based SEIR model of outbreaks, under the assumption that R0 ≈ 7 for measles. We show this implies that herd immunity requires vaccination coverage of greater than approximately 85%. We used a network-structured version of our NOVA model that involved two communities, one at a relatively low coverage of 85% and one at a higher coverage of 95%, both of which had 400-student schools embedded, as well as students occasionally visiting superspreading sites (e.g. high-density theme parks, cinemas, etc.). These two vaccination coverage levels are within the range of values occurring across California counties. Transmission rates at schools and superspreading sites were arbitrarily set to respectively 5 and 15 times background community rates. Simulations of our model demonstrate that a 'send unvaccinated students home' policy in low coverage counties is extremely effective at shutting down outbreaks of measles.
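The coverage threshold cited in the abstract follows from the standard well-mixed-population relation, under the abstract's own assumption R0 ≈ 7:

```python
def herd_immunity_threshold(R0):
    """Critical vaccination coverage p_c = 1 - 1/R0 for a well-mixed population."""
    return 1.0 - 1.0 / R0

p_c = herd_immunity_threshold(7.0)
print(round(p_c * 100, 1))   # 85.7 (% coverage)
# The model's two communities (85% and 95% coverage) therefore sit just below
# and comfortably above the threshold, which is what makes the comparison
# between them informative.
```

The spatially structured, individual-based model refines this mean-field threshold, but the 1 - 1/R0 calculation is what motivates the "greater than approximately 85%" figure.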
Modeling the Propagation of Mobile Phone Virus under Complex Network
Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei; Yao, Yu
2014-01-01
Mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation model of mobile phone virus is tackled to understand how particular factors can affect its propagation and design effective containment strategies to suppress mobile phone virus. Two different propagation models of mobile phone viruses under the complex network are proposed in this paper. One is intende...
Physical Modeling of Shear Behavior of Infilled Rock Joints Under CNL and CNS Boundary Conditions
Shrivastava, Amit Kumar; Rao, K. Seshagiri
2018-01-01
Despite their frequent natural occurrence, filled discontinuities under constant normal stiffness (CNS) boundary conditions have been studied much less systematically, perhaps because of the difficulties arising from the increased number of variable parameters. Because of the lack of reliable and realistic theoretical or empirical relations and the difficulties in obtaining and testing representative samples, engineers rely on judgment and often take the shear strength of the infill material itself as the shear strength of the rock joint. This assumption leads to uneconomical and sometimes unsafe design of underground structures, slopes, rock-socketed piles and foundations. To study the effect of infill on the shear behavior of rock joints, tests were performed on modeled infilled rock joints having different joint roughness under constant normal load (CNL) and CNS boundary conditions at various initial normal stresses and varying thickness of the infill material. The test results indicate that shear strength decreases with an increase in the t/a ratio for both CNL and CNS conditions, but the reduction in shear strength is greater for CNL than for CNS conditions at a given initial normal stress. A detailed account of the effect of the thickness of the infill material on the shear and deformation behavior of infilled rock joints is given in this paper, and a model is proposed to predict the shear strength of infilled rock joints.
Directory of Open Access Journals (Sweden)
Yosuke Otsuki
2013-01-01
Full Text Available Double aortic aneurysm (DAA) falls under the category of multiple aortic aneurysms. Repair is generally done through staged surgery due to its lower invasiveness. In this approach, one aneurysm is cured per operation. Therefore, two operations are required for DAA. However, post-first-surgery rupture cases have been reported. Although the problems involved with managing staged surgery have been discussed for more than 30 years, investigation from a hemodynamic perspective has not been attempted. Hence, this is the first computational fluid dynamics approach to the DAA problem. Three idealized geometries were prepared: presurgery, thoracic aortic aneurysm (TAA) cured, and abdominal aortic aneurysm (AAA) cured. By applying identical boundary conditions for flow rate and pressure, the Navier-Stokes and continuity equations were solved under the Newtonian fluid assumption. Average pressure in the TAA was increased by AAA repair. On the other hand, average pressure in the AAA was decreased after TAA repair. Average wall shear stress was decreased at the peak in post-first-surgery models. However, the wave profile of TAA average wall shear stress was changed in the late systole phase after AAA repair. Since the average wall shear stress in the post-first-surgery models decreased and the pressure at the TAA after AAA repair increased, the TAA might be treated first to prevent rupture.
Fuzzy techniques for subjective workload-score modeling under uncertainties.
Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina
2008-12-01
This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. Identifying a model that estimates the subjective workload score of individuals under different workload situations is too ambitious a task, because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to dealing with the uncertainties in a workload-modeling problem consists of the following steps: 1) the uncertainties arising due to individual variations in identifying a common model valid for all individuals are filtered out using a fuzzy filter; 2) the uncertainties (provided by the fuzzy filter) are modeled stochastically using finite-mixture models, and this information about the uncertainties is used to identify the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and finally utilizes the information about the uncertainties to adapt the workload model to an individual's physiological conditions. The approach, demonstrated with real-world medical data from 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.
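As an illustration of the finite-mixture step described above (not code from the paper), the sketch below fits a two-component one-dimensional Gaussian mixture by expectation-maximization to synthetic bimodal "uncertainty" data; all data and parameter choices are hypothetical.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM.
    A minimal sketch of finite-mixture modeling of residual uncertainties."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out initial means
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic bimodal data (hypothetical, not the paper's HR measurements)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
w, mu, var = em_two_gaussians(x)
print(np.sort(mu))  # component means, near 0 and 5
```

The fitted component means, weights and variances summarize the spread of the uncertainty, which is the kind of information the paper then uses to initialize the workload model.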
2011-10-14
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... regulation are the same. The interest assumptions are intended to reflect current conditions in the financial...
2011-08-15
... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... regulation are the same. The interest assumptions are intended to reflect current conditions in the financial...
Road Impedance Model Study under the Control of Intersection Signal
Directory of Open Access Journals (Sweden)
Yunlin Luo
2015-01-01
Full Text Available The road traffic impedance model is a difficult and critical point in urban traffic assignment and route guidance. This paper takes a signalized intersection as the research object. On the basis of traditional traffic wave theory, including the implementation of the traffic wave model and the analysis of vehicles' gathering and dissipating, the road traffic impedance model is derived by determining the basic travel time and the waiting delay time. Numerical examples show that the proposed model achieves better calculation performance than the existing model, especially in off-peak hours: the mean absolute percentage error (MAPE) and mean absolute deviation (MAD) are reduced by 3.78% and 2.62 s, respectively. This shows that the proposed model is feasible and usable for road traffic impedance under intersection signal control.
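For reference, the two error metrics quoted above can be computed as follows. This is a minimal sketch; the travel-time values are made up, not taken from the paper.

```python
def mape(actual, predicted):
    # Mean absolute percentage error, in percent
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def mad(actual, predicted):
    # Mean absolute deviation, in the units of the data (here: seconds)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

observed  = [120.0, 95.0, 150.0, 80.0]   # hypothetical observed travel times (s)
predicted = [110.0, 100.0, 140.0, 85.0]  # hypothetical model predictions (s)
print(round(mape(observed, predicted), 2))  # percentage error over the sample
print(round(mad(observed, predicted), 2))   # average absolute error in seconds
```

A reduction in MAPE of 3.78 percentage points and in MAD of 2.62 s, as reported in the abstract, would be read directly off such metrics computed for the two competing models.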
Asymptotics for Greeks under the constant elasticity of variance model
Kritski, Oleg L.; Zalmezh, Vladimir F.
2017-01-01
This paper is concerned with the asymptotics for the Greeks of European-style options and for the risk-neutral density function calculated under the constant elasticity of variance model. The formulae obtained help financial engineers to construct a perfect hedge with known behaviour and to price options on financial assets.
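The abstract gives no formulae, so the following is only a generic illustration of estimating a Greek under CEV dynamics dS = rS dt + σS^β dW: a central finite-difference Monte Carlo Delta using common random numbers. All parameter values and the function name are hypothetical, not from the paper.

```python
import math, random

def cev_call_delta_mc(s0, k, r, sigma, beta, t,
                      n_steps=100, n_paths=5000, h=0.01, seed=1):
    """Central finite-difference Delta of a European call under CEV dynamics
    dS = r*S dt + sigma*S**beta dW (Euler scheme, common random numbers)."""
    rng = random.Random(seed)
    dt = t / n_steps
    sqrt_dt = math.sqrt(dt)
    payoff_up = payoff_dn = 0.0
    for _ in range(n_paths):
        su, sd = s0 * (1 + h), s0 * (1 - h)  # bumped initial prices
        for _ in range(n_steps):
            z = rng.gauss(0.0, 1.0) * sqrt_dt  # same shock for both paths
            su = max(su + r * su * dt + sigma * su ** beta * z, 1e-12)
            sd = max(sd + r * sd * dt + sigma * sd ** beta * z, 1e-12)
        payoff_up += max(su - k, 0.0)
        payoff_dn += max(sd - k, 0.0)
    disc = math.exp(-r * t) / n_paths
    return disc * (payoff_up - payoff_dn) / (2 * h * s0)

delta = cev_call_delta_mc(100.0, 100.0, 0.05, 0.2, 0.9, 1.0)
print(delta)  # at-the-money Delta, roughly between 0.6 and 0.7 here
```

Common random numbers keep the variance of the finite difference small; for β = 1 the scheme reduces to geometric Brownian motion and the result approaches the Black-Scholes Delta.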
A flexible model for actuarial risks under dependence
Albers, Willem/Wim; Kallenberg, W.C.M.; Lukocius, V.
Methods for computing risk measures, such as stop-loss premiums, tacitly assume independence of the underlying individual risks. This can lead to huge errors even when only small dependencies occur. In the present paper, a general model is developed which covers what happens in practice in a
UNDERGRADUATE RESEARCH An alternative model of doing ...
Indian Academy of Sciences (India)
UNDERGRADUATE RESEARCH: An alternative model of doing science. The main work force is undergraduate students. Using research as a tool in education. Advantages: high risk tolerance, infinite energy, uninhibited lateral thinking. Problems: Japanese ...
Optimization of Weibull deteriorating items inventory model under ...
Indian Academy of Sciences (India)
In this study, we discuss the development of an inventory model in which the deterioration rate of the item follows a two-parameter Weibull distribution, under the effect of selling-price- and time-dependent demand, since not only the selling price but also time is a crucial factor in enhancing demand in the market as ...
Fan, Weihua; Hancock, Gregory R.
2012-01-01
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
A CHF Model in Narrow Gaps under Saturated Boiling
International Nuclear Information System (INIS)
Park, Suki; Kim, Hyeonil; Park, Cheol
2014-01-01
Many researchers have paid great attention to the CHF in narrow gaps because of its numerous industrial applications. In particular, a great number of CHF studies have been carried out in relation to nuclear safety issues such as in-vessel retention for nuclear power plants during a severe accident. Analytical studies to predict the CHF in narrow gaps have also been reported. Yu et al. (2012) developed an analytical model to predict the CHF on downward-facing and inclined heaters based on the model of Kandlikar et al. (2001) for an upward-facing heater. In this work, a new theoretical model is developed to predict the CHF in narrow gaps under saturated pool boiling. The model is applicable when one side of the coolant channel or both sides are heated, and it includes the effects of heater orientation. The model is compared with experimental CHF data obtained in narrow gaps with one heater. The comparisons indicate a good agreement with the experimental CHF data for horizontal annular tubes. However, the model generally under-predicts the data for narrow rectangular gaps, except those obtained for a gap thickness of 10 mm and a horizontal downward-facing heater.
Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann
2015-01-01
Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
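The contrast between entropy-based and heterozygosity-based measures can be made concrete with a small sketch; the allele frequencies below are hypothetical.

```python
import math

def shannon_entropy(freqs):
    # H = -sum p_i ln p_i over allele frequencies (natural log);
    # weighs each allele in proportion to its population fraction
    return -sum(p * math.log(p) for p in freqs if p > 0)

def expected_heterozygosity(freqs):
    # He = 1 - sum p_i^2: probability that two random gene copies differ
    return 1.0 - sum(p * p for p in freqs)

freqs = [0.5, 0.3, 0.2]                 # hypothetical frequencies at one locus
print(shannon_entropy(freqs))           # ≈ 1.0297
print(expected_heterozygosity(freqs))   # 0.62
```

Both measures summarize the same frequency vector, but H responds to rare alleles differently from He, which is the distinction the paper's equilibrium expressions make precise.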
Verification of the karst flow model under laboratory controlled conditions
Gotovac, Hrvoje; Andric, Ivo; Malenica, Luka; Srzic, Veljko
2016-04-01
Karst aquifers are very important groundwater resources around the world, including the coastal part of Croatia. They have an extremely complex structure, defined by a slow, laminar porous matrix with small fissures and by usually fast, turbulent conduits (karst channels). Apart from simple lumped hydrological models that ignore the high heterogeneity of karst, full hydraulic (distributive) models have been developed using conventional finite element and finite volume methods; these consider the complete heterogeneity structure and improve our understanding of the complex processes in karst. Groundwater flow modeling in complex karst aquifers faces many difficulties, such as a lack of knowledge of the heterogeneity (especially the conduits), the resolution of different spatial and temporal scales, the connectivity between matrix and conduits, and the setting of appropriate boundary conditions, among others. A particular problem of karst flow modeling is the verification of distributive models under real aquifer conditions, owing to the lack of the above-mentioned information. Here, therefore, we show the possibility of verifying karst flow models under laboratory-controlled conditions. A special 3-D karst flow model (5.6 × 2.6 × 2 m) consists of a concrete construction, a rainfall platform, 74 piezometers, 2 reservoirs and other supply equipment. The model is filled with fine sand (the 3-D porous matrix) and plastic drainage pipes (the 1-D conduits). This set-up provides knowledge of the full heterogeneity structure, including the position of the different sand layers as well as the location and geometry of the conduits. Moreover, the geometry of the conduit perforations is known, which enables analysis of the interaction between matrix and conduits. In addition, the pressure and precipitation distributions and the discharge flow rates from both phases can be measured very accurately. These possibilities are not available at real sites, which makes this model much more useful for karst flow modeling. Many experiments were performed under different controlled conditions such as different
New Assumptions to Guide SETI Research
Colombano, S. P.
2018-01-01
The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.
Kroon, M.
2011-11-01
Rubbers and soft biological tissues may undergo large deformations and are also viscoelastic. The formulation of constitutive models for these materials poses special challenges. In several applications, especially in biomechanics, these materials are also relatively thin, implying that in-plane stresses dominate and that plane stress may therefore be assumed. In the present paper, a constitutive model for viscoelastic materials in the finite strain regime and under the assumption of plane stress is proposed. It is assumed that the relaxation behaviour in the direction of plane stress can be treated separately, which makes it possible to formulate evolution laws for the plastic strains in explicit form while at the same time fulfilling incompressibility. Experimental results from biomechanics (dynamic inflation of dog aorta) and rubber mechanics (biaxial stretching of rubber sheets) were used to assess the proposed model. The assessment clearly indicates that the model is fully able to predict the experimental outcome for these types of material.
Directory of Open Access Journals (Sweden)
J.K. Jolayemi
2014-01-01
Full Text Available A zero-one mixed integer linear programming model is developed for the scheduling of projects under conditions of inflation and under penalty and reward arrangements. The effects of inflation on time-cost trade-off curves are illustrated, and a modified approach to time-cost trade-off analysis is presented. Numerical examples are given to illustrate the model and its properties. The examples show that misleading schedules and inaccurate project-cost estimates will be produced if the inflation factor is neglected in an environment of high inflation. They also show that the award of a penalty or bonus is a catalyst for early completion of a project, as would be expected.
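The role of the inflation factor in project-cost estimates can be illustrated with a simple compound-escalation sketch. The numbers are hypothetical, and the paper's model is a full zero-one MILP, not reproduced here.

```python
def project_cost(period_costs, inflation_rate):
    """Total nominal cost when each period's base cost is escalated by
    compound inflation: cost_t = base_t * (1 + i)**t (period 0 unescalated)."""
    return sum(c * (1 + inflation_rate) ** t for t, c in enumerate(period_costs))

base = [100.0, 100.0, 100.0]              # hypothetical per-period base costs
print(project_cost(base, 0.0))            # 300.0 -- estimate ignoring inflation
print(round(project_cost(base, 0.10), 2)) # 331.0 -- at 10% inflation per period
```

Neglecting a 10% inflation rate in this toy example understates the project cost by about 10%, which is the kind of distortion the numerical examples in the paper demonstrate.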
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in the irrigation data and different calibration conditions. The results from the OLS method show statistically significant bias in parameter estimates and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
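The weighting idea behind IUWLS can be sketched in its simplest form: weighted least squares with inverse-variance weights, so that observations affected by uncertain inputs count less. The data, weights, and function below are illustrative, not the paper's algorithm.

```python
import numpy as np

def weighted_least_squares(X, y, weights):
    # Solve min_b (y - X b)^T W (y - X b) with W = diag(weights)
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
true_b = np.array([2.0, 0.5])
# Second half of the observations driven by "uncertain" inputs: larger noise
noise_sd = np.where(np.arange(50) < 25, 0.1, 2.0)
y = X @ true_b + rng.normal(0, noise_sd)
w = 1.0 / noise_sd ** 2        # weight ~ inverse variance of each observation
b_hat = weighted_least_squares(X, y, w)
print(b_hat)                   # close to [2.0, 0.5]
```

Ordinary least squares would treat all 50 observations equally and let the noisy half inflate (and potentially bias) the estimates; the inverse-variance weights restore efficient, unbiased estimation in this idealized setting.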
Seo, Changwan; Thorne, James H; Hannah, Lee; Thuiller, Wilfried
2009-02-23
Predictions of future species' ranges under climate change are needed for conservation planning, for which species distribution models (SDMs) are widely used. However, output grids from global climate models (GCMs) can bias the area identified as suitable when they are used as SDM predictor variables, because GCM outputs, typically at least 50 × 50 km, are biologically coarse. We tested the assumption that species ranges can be equally well portrayed in SDMs operating on base data of different grid sizes by comparing SDM performance statistics and the area selected by four SDMs run at seven grid sizes, for nine species of contrasting range size. The area selected was disproportionately larger for SDMs run on larger grid sizes, indicating a cut-off point above which model results were less reliable. Up to 2.89 times more species range area was selected by SDMs operating on grids above 50 × 50 km, compared to SDMs operating at 1 km². Spatial congruence between areas selected as range also diverged as grid size increased, particularly for species with ranges between 20,000 and 90,000 km². These results indicate the need for caution when using such data to plan future protected areas, because an overly large predicted range could lead to inappropriate reserve location selection.
Misleading prioritizations from modelling range shifts under climate change
Sofaer, Helen R.; Jarnevich, Catherine S.; Flather, Curtis H.
2018-01-01
Aim: Conservation planning requires the prioritization of a subset of taxa and geographical locations to focus monitoring and management efforts. Integration of the threats and opportunities posed by climate change often relies on predictions from species distribution models, particularly for assessments of vulnerability or invasion risk for multiple taxa. We evaluated whether species distribution models could reliably rank changes in species range size under climate and land use change. Location: Conterminous U.S.A. Time period: 1977–2014. Major taxa studied: Passerine birds. Methods: We estimated ensembles of species distribution models based on historical North American Breeding Bird Survey occurrences for 190 songbirds, and generated predictions to recent years given c. 35 years of observed land use and climate change. We evaluated model predictions using standard metrics of discrimination performance and a more detailed assessment of the ability of models to rank species vulnerability to climate change based on predicted range loss, range gain, and overall change in range size. Results: Species distribution models yielded unreliable and misleading assessments of relative vulnerability to climate and land use change. Models could not accurately predict range expansion or contraction, and therefore failed to anticipate patterns of range change among species. These failures occurred despite excellent overall discrimination ability and transferability to the validation time period, which reflected strong performance at the majority of locations that were either always or never occupied by each species. Main conclusions: Models failed for the questions and at the locations of greatest interest to conservation and management. This highlights potential pitfalls of multi-taxa impact assessments under global change; in our case, models provided misleading rankings of the most impacted species, and spatial information about range changes was not credible. As modelling methods and
Explorations in statistics: the assumption of normality.
Curran-Everett, Douglas
2017-09-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This twelfth installment of Explorations in Statistics explores the assumption of normality, an assumption essential to the meaningful interpretation of a t test. Although the data themselves can be consistent with a normal distribution, they need not be. Instead, it is the theoretical distribution of the sample mean or the theoretical distribution of the difference between sample means that must be roughly normal. The most versatile approach to assess normality is to bootstrap the sample mean, the difference between sample means, or t itself. We can then assess whether the distributions of these bootstrap statistics are consistent with a normal distribution by studying their normal quantile plots. If we suspect that an inference we make from a t test may not be justified-if we suspect that the theoretical distribution of the sample mean or the theoretical distribution of the difference between sample means is not normal-then we can use a permutation method to analyze our data. Copyright © 2017 the American Physiological Society.
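The bootstrap approach described above can be sketched as follows, on synthetic skewed data; a normal quantile plot is replaced here by a crude symmetry check on the bootstrap distribution.

```python
import random, statistics

def bootstrap_means(sample, n_boot=2000, seed=42):
    """Bootstrap the sample mean: resample with replacement, record each mean."""
    rng = random.Random(seed)
    n = len(sample)
    return [statistics.fmean(rng.choices(sample, k=n)) for _ in range(n_boot)]

# Hypothetical right-skewed data: the raw values need not be normal ...
data = [0.2, 0.5, 0.6, 0.9, 1.1, 1.4, 1.8, 2.5, 3.9, 7.0]
means = sorted(bootstrap_means(data))

# ... but the distribution of the bootstrap sample mean should be roughly
# normal. As a crude stand-in for a normal quantile plot, compare its mean
# and median: they nearly coincide for a symmetric distribution.
centre = statistics.fmean(means)
median = means[len(means) // 2]
print(abs(centre - median))  # small when the bootstrap distribution is symmetric
```

In practice one would plot the sorted bootstrap means against normal quantiles, as the article describes; the symmetry check above is only the simplest numerical surrogate.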
Modeling ocean wave propagation under sea ice covers
Zhao, Xin; Shen, Hayley H.; Cheng, Sukun
2015-02-01
Operational ocean wave models need to work globally, yet current ocean wave models can only treat ice-covered regions crudely. The purpose of this paper is to provide a brief overview of ice effects on wave propagation and of the different research methodologies used in studying these effects. Based on its proximity to land or open sea, sea ice can be classified into the landfast ice zone, the shear zone, and the marginal ice zone. All ice covers attenuate wave energy; only long swells can penetrate deep into an ice cover. Being closest to open water, wave propagation in the marginal ice zone is the most complex to model. The physical appearance of sea ice in the marginal ice zone varies: grease ice, pancake ice, brash ice, floe aggregates, and continuous ice sheet may be found in this zone at different times and locations. These types of ice are formed under different thermal-mechanical forcing. There are three classic models that describe wave propagation through an idealized ice cover: the mass loading, thin elastic plate, and viscous layer models. From physical arguments we may conjecture that the mass loading model is suitable for disjoint aggregates of ice floes much smaller than the wavelength, the thin elastic plate model is suitable for a continuous ice sheet, and the viscous layer model is suitable for grease ice. For different sea ice types we may therefore need different wave-ice interaction models. A recently proposed viscoelastic model is able to synthesize all three classic models into one: under suitable limiting conditions it converges to the three previous models. The complete theoretical framework for evaluating wave propagation through various ice covers needs to be implemented in the operational ocean wave models. In this review, we introduce the sea ice types, previous wave-ice interaction models, wave attenuation mechanisms, the methods to calculate wave reflection and transmission between different ice covers, and the effect of ice floe breaking on shaping the sea ice morphology.
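The statement that all ice covers attenuate wave energy, with long swells penetrating farthest, is commonly expressed as exponential amplitude decay along the propagation distance. The sketch below uses illustrative attenuation coefficients, not values from the review.

```python
import math

def wave_amplitude(a0, alpha, x):
    """Exponential amplitude decay under an ice cover: a(x) = a0 * exp(-alpha*x).
    alpha (1/m) is the attenuation coefficient; it is typically much smaller
    for long swells than for short wind waves (values here are illustrative)."""
    return a0 * math.exp(-alpha * x)

distance = 50_000  # 50 km into the ice cover
short_wave = wave_amplitude(1.0, 1e-4, distance)  # strongly attenuated
long_swell = wave_amplitude(1.0, 1e-5, distance)  # penetrates much farther
print(short_wave, long_swell)
```

With these hypothetical coefficients the short wave retains well under 1% of its amplitude after 50 km while the long swell retains about 60%, matching the qualitative picture that only long swells penetrate deep into an ice cover.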
Modeling the constitutive behavior of RAFM steels under irradiation conditions
Aktaa, J.; Petersen, C.
2011-10-01
A coupled viscoplastic deformation damage model will be presented which is modified to take into account irradiation induced hardening and its recovery due to inelastic deformation and/or high temperature annealing. The model allows the prediction of the constitutive behavior of RAFM steels under arbitrary creep-fatigue and irradiation loading conditions. It can be implemented in commercial finite element codes and thus be used for the lifetime assessment of fusion reactor components. The model is applied to describe the behavior of the RAFM steels, EUROFER 97 and F82H mod, observed in post irradiation examinations of the irradiation programs ARBOR I and ARBOR II. Data from their tensile and low cycle fatigue tests were used to determine the material and temperature dependent parameters of the model and to verify its prediction capability.
HYPROLOG: A New Logic Programming Language with Assumptions and Abduction
DEFF Research Database (Denmark)
Christiansen, Henning; Dahl, Veronica
2005-01-01
The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the "pure" sublanguages, while for the full HYPROLOG language it must be taken as definition. The implementation is straightforward and seems to provide ...
Modeling protein network evolution under genome duplication and domain shuffling
Directory of Open Access Journals (Sweden)
Isambert Hervé
2007-11-01
Full Text Available Background: Successive whole genome duplications have recently been firmly established in all major eukaryote kingdoms. Such exponential evolutionary processes must have largely contributed to shape the topology of protein-protein interaction (PPI) networks by outweighing, in particular, all time-linear network growths modeled so far. Results: We propose and solve a mathematical model of PPI network evolution under successive genome duplications. This demonstrates, from first principles, that evolutionary conservation and scale-free topology are intrinsically linked properties of PPI networks and emerge from (i) prevailing exponential network dynamics under duplication and (ii) asymmetric divergence of gene duplicates. While required, we argue that this asymmetric divergence arises, in fact, spontaneously at the level of protein-binding sites. This supports a refined model of PPI network evolution in terms of protein domains under exponential and asymmetric duplication/divergence dynamics, with multidomain proteins underlying the combinatorial formation of protein complexes. Genome duplication then provides a powerful source of PPI network innovation by promoting local rearrangements of multidomain proteins on a genome-wide scale. Yet, we show that the overall conservation and topology of PPI networks are robust to extensive domain shuffling of multidomain proteins as well as to finer details of protein interaction and evolution. Finally, large-scale features of direct and indirect PPI networks of S. cerevisiae are well reproduced numerically with only two adjusted parameters of clear biological significance (i.e. network effective growth rate and average number of protein-binding domains per protein). Conclusion: This study demonstrates the statistical consequences of genome duplication and domain shuffling on the conservation and topology of PPI networks over a broad evolutionary scale across eukaryote kingdoms. In particular, scale
Mathematical modelling of unglazed solar collectors under extreme operating conditions
DEFF Research Database (Denmark)
Bunea, M.; Perers, Bengt; Eicher, S.
2015-01-01
Combined heat pumps and solar collectors got a renewed interest on the heating system market worldwide. Connected to the heat pump evaporator, unglazed solar collectors can considerably increase their efficiency, but they also raise the coefficient of performance of the heat pump with higher average temperature levels at the evaporator. Simulation of these systems requires a collector model that can take into account operation at very low temperatures (below freezing) and under various weather conditions, particularly operation without solar irradiation. A solar collector mathematical model … was found due to the condensation phenomenon and up to 40% due to frost under no solar irradiation. This work also points out the influence of the operating conditions on the collector's characteristics. Based on experiments carried out at a test facility, every heat flux on the absorber was separately ...
Grain breakage under uniaxial compression, through 3D DEM modelling
Nader, François; Silvani, Claire; Djeran-Maigre, Irini
2017-06-01
A breakable grain model is presented, using the concept of particle assembly. Grains of polyhedral shapes are generated, formed by joining together tetrahedral subgrains using cohesive bonds. Single-grain crushing simulations are performed for multiple values of the intra-granular cohesion to study its effect on the grain's strength. The same effect of intra-granular cohesion is studied under oedometric compression on samples of around 800 grains, which allows evaluation of the grain breakage model's influence on the macroscopic behaviour. Grain size distribution curves and grain breakage ratios are monitored throughout the simulations.
A sliding mode observer for hemodynamic characterization under modeling uncertainties
Zayane, Chadia
2014-06-01
This paper addresses the case of physiological state reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial knowledge of the so-called balloon model describing the hemodynamic behavior of the brain. To overcome this difficulty, a high-order sliding mode observer is applied to the balloon system, where the unknown coupling is considered as an internal perturbation. The effectiveness of the proposed method is illustrated through a set of synthetic data that mimic fMRI experiments.
Model analyses for sustainable energy supply under CO2 restrictions
International Nuclear Information System (INIS)
Matsuhashi, Ryuji; Ishitani, Hisashi.
1995-01-01
This paper aims at clarifying key points for realizing a sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for the sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand over the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results have shown that the present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term, given appropriate incentives.
Climate Change: Implications for the Assumptions, Goals and Methods of Urban Environmental Planning
Directory of Open Access Journals (Sweden)
Kristina Hill
2016-12-01
-based quantitative models of regional system behavior that may soon be used to determine acceptable land uses. Finally, the philosophical assumptions that underlie urban environmental planning are changing to address new epistemological, ontological and ethical assumptions that support new methods and goals. The inability to use the past as a guide to the future, new prioritizations of values for adaptation, and renewed efforts to focus on intergenerational justice are provided as examples. In order to represent a genuine paradigm shift, this review argues that changes must begin to be evident across the underlying assumptions, conceptual frameworks, and methods of urban environmental planning, and be attributable to the same root cause. The examples presented here represent the early stages of a change in the overall paradigm of the discipline.
Challenging the assumptions for thermal sensation scales
DEFF Research Database (Denmark)
Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne
2016-01-01
Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used...... extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its...... assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non...
Lázár, Attila N; Clarke, Derek; Adams, Helen; Akanda, Abdur Razzaque; Szabo, Sylvia; Nicholls, Robert J; Matthews, Zoe; Begum, Dilruba; Saleh, Abul Fazal M; Abedin, Md Anwarul; Payo, Andres; Streatfield, Peter Kim; Hutton, Craig; Mondal, M Shahjahan; Moslehuddin, Abu Zofar Md
2015-06-01
Coastal Bangladesh experiences significant poverty and hazards today and is highly vulnerable to climate and environmental change over the coming decades. Coastal stakeholders are demanding information to assist in decision-making processes, including simulation models to explore how different interventions, under different plausible future socio-economic and environmental scenarios, could alleviate environmental risks and promote development. Many existing simulation models neglect the complex interdependencies between the socio-economic and environmental systems of coastal Bangladesh. Here an integrated approach is proposed to develop a simulation model to support agriculture- and poverty-based analysis and decision-making in coastal Bangladesh. In particular, we show how a simulation model of farmers' livelihoods at the household level can be achieved. An extended version of the FAO's CROPWAT agriculture model has been integrated with a downscaled regional demography model to simulate net agriculture profit. This is used together with a household income-expenses balance and a loans logical tree to simulate the evolution of food security indicators and poverty levels. Modelling identifies salinity and temperature stress as limiting factors to crop productivity, and fertilisation due to rising atmospheric carbon dioxide concentrations as a reinforcing factor. The crop simulation results compare well with expected outcomes but also reveal some unexpected behaviours. For example, under current model assumptions, temperature is more important than salinity for crop production. The agriculture-based livelihood and poverty simulations highlight the critical significance of debt through informal and formal loans set at such levels as to persistently undermine the well-being of agriculture-dependent households. Simulations also indicate that progressive approaches to agriculture (i.e. diversification) might not provide the clear economic benefit from the perspective of
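The household income-expenses balance with a loan rule can be sketched as below. All numbers, the loan-interest rule, and the poverty criterion are illustrative assumptions, not the paper's calibrated values.

```python
def simulate_household(years, farm_profit, expenses, loan_rate=0.25,
                       poverty_line=1000.0):
    """Toy annual income-expense balance: a deficit is covered by an
    informal loan repaid with interest the following year, echoing the
    paper's 'loans logical tree'. Illustrative parameters only."""
    savings, debt, below_poverty = 0.0, 0.0, []
    for year in range(years):
        income = farm_profit[year]
        repayment = debt * (1.0 + loan_rate)     # service last year's loan
        balance = savings + income - expenses[year] - repayment
        debt = -balance if balance < 0 else 0.0  # new loan covers any deficit
        savings = max(balance, 0.0)
        below_poverty.append(income - repayment < poverty_line)
    return savings, debt, below_poverty
```

Even this toy version shows the paper's qualitative point: once loan servicing recurs, disposable income can stay below the poverty line in years when farm profit alone would not.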
Electricity pricing model in thermal generating stations under deregulation
International Nuclear Information System (INIS)
Reji, P.; Ashok, S.; Moideenkutty, K.M.
2007-01-01
Under deregulation, competitive power markets have replaced the regulated monopoly of public utilities. In a deregulated power market, the electricity price primarily depends on the market mechanism and power demand. In this market, generators generally follow marginal pricing: each generator fixes the electricity price based on its pricing strategy, which leads to greater price volatility. This paper proposed a model to determine the electricity price for a thermal generating station under deregulation, considering all operational constraints of the plant and the economic variables that influence the price. The purpose of the model was to assist existing stations, investors in the power sector, regulatory authorities, transmission utilities, and new power generators in decision-making. The model could accommodate price volatility in the market and was based on performance incentives/penalties considering plant load factor, availability of the plant, and peak/off-peak demand. The model was applied as a case study to a typical thermal utility in India to determine the electricity price. The case study showed that in a deregulated environment the electricity price mainly depends on the gross calorific value (GCV) of the fuel, mode of operation, price of the fuel, and operating charges. 11 refs., 2 tabs., 1 fig
Lamon, Lara; Von Waldow, Harald; Macleod, Matthew; Scheringer, Martin; Marcomini, Antonio; Hungerbühler, Konrad
2009-08-01
We used the multimedia chemical fate model BETR Global to evaluate changes in the global distribution of two polychlorinated biphenyls, PCB 28 and PCB 153, under the influence of climate change. This was achieved by defining two climate scenarios based on results from a general circulation model, one scenario representing the last twenty years of the 20th century (20CE scenario) and another representing the global climate under the assumption of strong future greenhouse gas emissions (A2 scenario). The two climate scenarios are defined by four groups of environmental parameters: (1) temperature in the planetary boundary layer and the free atmosphere, (2) wind speeds and directions in the atmosphere, (3) current velocities and directions in the surface mixed layer of the oceans, and (4) rate and geographical pattern of precipitation. As a fifth parameter in our scenarios, we consider the effect of temperature on primary volatilization emissions of PCBs. Comparison of dynamic model results using environmental parameters from the 20CE scenario against historical long-term monitoring data of concentrations of PCB 28 and PCB 153 in air from 16 different sites shows satisfactory agreement between modeled and measured PCB concentrations. The 20CE scenario and A2 scenario were compared using steady-state calculations and assuming the same source characteristics of PCBs. The temperature difference between the two scenarios is the dominant factor that determines the difference in PCB concentrations in air. The higher temperatures in the A2 scenario drive increased primary and secondary volatilization emissions of PCBs, and enhance transport from temperate regions to the Arctic. The largest relative increase in concentrations of both PCB congeners in air under the A2 scenario occurs in the high Arctic and the remote Pacific Ocean. Generally, higher wind speeds under the A2 scenario result in more efficient intercontinental transport of PCB 28 and PCB 153 compared to the 20CE
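The temperature dependence of volatilization emissions that drives this result can be sketched with a standard Clausius-Clapeyron/van 't Hoff scaling. The internal-energy value of ~80 kJ/mol is a typical octanol-air figure for PCBs used here as an assumption, not a parameter taken from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def volatilization_scaling(T, T_ref=283.15, dU_oa=80e3):
    """Relative change in volatilization emission at temperature T versus
    a reference temperature, from a van 't Hoff relation with the
    octanol-air phase-transfer energy dU_oa (assumed ~80 kJ/mol)."""
    return math.exp(-dU_oa / R * (1.0 / T - 1.0 / T_ref))
```

A warming of only 5 K above the reference roughly doubles the volatilization driving force under this assumption, consistent with the A2 scenario's enhanced primary and secondary emissions.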
Modeling Malaria Vector Distribution under Climate Change Scenarios in Kenya
Ngaina, J. N.
2017-12-01
Projecting the distribution of malaria vectors under climate change is essential for planning integrated vector control strategies for sustaining elimination and preventing reintroduction of malaria. However, in Kenya, little knowledge exists on the possible effects of climate change on malaria vectors. Here we assess the potential impact of future climate change on locally dominant Anopheles vectors including Anopheles gambiae, Anopheles arabiensis, Anopheles merus, Anopheles funestus, Anopheles pharoensis and Anopheles nili. Environmental data (climate, land cover and elevation) and primary empirical geo-located species-presence data were identified. The principle of maximum entropy (Maxent) was used to model the species' potential distribution area under paleoclimate, current and future climates. The Maxent model was highly accurate with a statistically significant AUC value. Simulation-based estimates suggest that the environmentally suitable area (ESA) for Anopheles gambiae, An. arabiensis, An. funestus and An. pharoensis would increase under both scenarios for mid-century (2016-2045), but decrease for the end of the century (2071-2100). An increase in the ESA of An. funestus was estimated under the medium stabilizing (RCP4.5) and very heavy (RCP8.5) emission scenarios for mid-century. Our findings can be applied in various ways, such as the identification of additional localities where Anopheles malaria vectors may already exist but have not yet been detected, and the recognition of localities to which they are likely to spread. Moreover, they will help guide future sampling location decisions, help with the planning of vector control suites nationally, and encourage broader research inquiry into vector species niche modeling.
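The Maxent principle behind the distribution model can be made concrete with a tiny fit: choose the distribution over grid cells of maximum entropy whose expected environmental features match the mean features at presence locations. This is a minimal gradient-ascent sketch, not the Maxent software used in the study; the learning rate and iteration count are assumptions.

```python
import math

def fit_maxent(cell_features, presence_mean, lr=0.5, iters=2000):
    """Fit p(cell) proportional to exp(lambda . f(cell)) so that the
    expected features under p equal the presence-sample mean features.
    cell_features: list of feature tuples, one per grid cell."""
    k = len(presence_mean)
    lam = [0.0] * k
    for _ in range(iters):
        w = [math.exp(sum(l * f for l, f in zip(lam, feats)))
             for feats in cell_features]
        z = sum(w)
        # expected features under the current model
        exp_f = [sum(wi * feats[j] for wi, feats in zip(w, cell_features)) / z
                 for j in range(k)]
        for j in range(k):
            lam[j] += lr * (presence_mean[j] - exp_f[j])
    w = [math.exp(sum(l * f for l, f in zip(lam, feats)))
         for feats in cell_features]
    z = sum(w)
    return lam, [wi / z for wi in w]
```

With two cells differing in one binary feature and a presence mean of 0.8, the fitted distribution puts probability 0.8 on the feature-present cell, matching the constraint exactly.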
Computational modeling for hexcan failure under core disruptive accident conditions
Energy Technology Data Exchange (ETDEWEB)
Sawada, T.; Ninokata, H.; Shimizu, A. [Tokyo Institute of Technology (Japan)
1995-09-01
This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.
Numerical solution of dynamic equilibrium models under Poisson uncertainty
DEFF Research Database (Denmark)
Posch, Olaf; Trimborn, Timo
2013-01-01
We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations...... of the retarded type. We apply the Waveform Relaxation algorithm, i.e., we provide a guess of the policy function and solve the resulting system of (deterministic) ordinary differential equations by standard techniques. For parametric restrictions, analytical solutions to the stochastic growth model and a novel...... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households....
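The Waveform Relaxation idea can be sketched on a toy coupled system: hold one trajectory fixed, integrate the other equation, then swap, and repeat. Explicit Euler stands in here for the paper's "standard techniques"; the test system, step size, and sweep count are assumptions.

```python
def waveform_relaxation(f, g, x0, y0, t_end, dt, sweeps=20):
    """Gauss-Seidel waveform relaxation for x' = f(x, y), y' = g(x, y):
    integrate the x-equation with the y-trajectory frozen, then the
    y-equation with the updated x, until the trajectories stop changing."""
    n = round(t_end / dt)
    xs = [x0] * (n + 1)
    ys = [y0] * (n + 1)            # initial guess: constant trajectories
    for _ in range(sweeps):
        for i in range(n):         # x-sweep with frozen y
            xs[i + 1] = xs[i] + dt * f(xs[i], ys[i])
        for i in range(n):         # y-sweep with updated x
            ys[i + 1] = ys[i] + dt * g(xs[i], ys[i])
    return xs, ys
```

Because each sweep propagates the correct solution at least one time step further, the iterates converge to exactly the trajectory a directly coupled Euler integration would produce.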
Assessment of interfacial heat transfer models under subcooled flow boiling
Energy Technology Data Exchange (ETDEWEB)
Ribeiro, Guilherme B.; Braz Filho, Francisco A., E-mail: gbribeiro@ieav.cta.br, E-mail: fbraz@ieav.cta.br [Instituto de Estudos Avançados (DCTA/IEAv), São José dos Campos, SP (Brazil). Div. de Energia Nuclear
2017-07-01
The present study concerns a detailed analysis of subcooled flow boiling characteristics under high-pressure systems using a two-fluid Eulerian approach provided by a Computational Fluid Dynamics (CFD) solver. For this purpose, a vertical heated pipe made of stainless steel with an internal diameter of 15.4 mm was considered as the modeled domain. A uniform heat flux of 570 kW/m2 was applied to the channel wall at a saturation pressure of 4.5 MPa, whereas a water mass flux of 900 kg/m2s was considered for all simulation cases. The model was validated against a set of experimental data, and the results indicate a promising use of the CFD technique for the estimation of the wall temperature, the liquid bulk temperature and the location of the departure of nucleate boiling. Different sub-models of the interfacial heat transfer coefficient were applied and compared, allowing a better prediction of the void fraction along the heated channel. (author)
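One widely used interfacial heat-transfer sub-model of the kind compared in such two-fluid solvers is the Ranz-Marshall correlation; the abstract does not name the specific sub-models, so this is shown as a representative example, not the study's own choice. The sample property values below are assumptions.

```python
def ranz_marshall_nusselt(re, pr):
    """Ranz-Marshall correlation for the interfacial Nusselt number:
    Nu = 2 + 0.6 * Re^0.5 * Pr^(1/3)."""
    return 2.0 + 0.6 * re ** 0.5 * pr ** (1.0 / 3.0)

def interfacial_htc(re, pr, k_liquid, bubble_diameter):
    """Interfacial heat transfer coefficient h = Nu * k_l / d_b, with
    k_liquid in W/(m K) and bubble_diameter in m."""
    return ranz_marshall_nusselt(re, pr) * k_liquid / bubble_diameter
```

The Nu = 2 limit at zero Reynolds number is pure conduction around a sphere; the convective term grows with bubble slip velocity, which is why the choice of sub-model shifts the predicted condensation rate and hence the void fraction profile.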
Omnibus tests of the martingale assumption in the analysis of recurrent failure time data.
Jones, C L; Harrington, D P
2001-06-01
The Andersen-Gill multiplicative intensity (MI) model is well-suited to the analysis of recurrent failure time data. The fundamental assumption of the MI model is that the process Mi(t) for subjects i = 1, ..., n, defined to be the difference between a subject's counting process and compensator, i.e., Ni(t) - Ai(t); t > 0, is a martingale with respect to some filtration. We propose omnibus procedures for testing this assumption. The methods are based on transformations of the estimated martingale residual process Mi(t), a function of consistent estimates of the log-intensity ratios and the baseline cumulative hazard. Under a correctly specified model, the expected value of Mi(t) is approximately equal to zero, with approximately uncorrelated increments. These properties are exploited in the proposed testing procedures. We examine the effects of censoring and covariate effects on the operating characteristics of the proposed methods via simulation. The procedures are most sensitive to the omission of a time-varying continuous covariate. We illustrate use of the methods in an analysis of data from a clinical trial involving patients with chronic granulomatous disease.
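The residual process at the heart of these tests can be computed in a few lines for the simplest case: single-event, right-censored data under a covariate-free null model, with the compensator estimated by the Nelson-Aalen cumulative hazard. This is a sketch of the quantity whose transformations the omnibus tests examine; ties and covariates are ignored for brevity.

```python
def martingale_residuals(times, events):
    """Martingale residuals M_i = N_i(t_i) - Lambda_hat(t_i), with
    Lambda_hat the Nelson-Aalen estimator under the null model.
    times: follow-up times; events: 1 if failure observed, 0 if censored."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    cum_haz, haz_at, at_risk = 0.0, {}, n
    for idx in order:
        if events[idx]:
            cum_haz += 1.0 / at_risk   # Nelson-Aalen increment 1/Y(t)
        haz_at[idx] = cum_haz          # Lambda_hat at subject's exit time
        at_risk -= 1
    return [events[i] - haz_at[i] for i in range(n)]
```

A quick sanity check of the zero-mean property: summing Lambda_hat(t_i) over subjects counts each event's increment 1/Y exactly Y times, so the residuals sum to zero exactly, mirroring the approximate zero-mean property the tests exploit.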
Modeling of Soybean under Present and Future Climates in Mozambique
Directory of Open Access Journals (Sweden)
Manuel António Dina Talacuece
2016-06-01
This study aims to calibrate and validate the generic crop model (CROPGRO-Soybean and estimate the soybean yield, considering simulations with different sowing times for the current period (1990–2013) and a future climate scenario (2014–2030). The database used came from observed data, nine climate models of the CORDEX (Coordinated Regional climate Downscaling Experiment-Africa framework and MERRA (Modern Era Retrospective-Analysis for Research and Applications reanalysis. The calibration and validation data for the model were acquired in field experiments, carried out in the 2009/2010 and 2010/2011 growing seasons in the experimental area of the International Institute of Tropical Agriculture (IITA in Angónia, Mozambique. The yield of two soybean cultivars, Tgx 1740-2F and Tgx 1908-8F, was evaluated in the experiments and modeled for two distinct CO2 concentrations. Our model simulation results indicate that the fertilization effect leads to yield gains for both cultivars, ranging from 11.4% (Tgx 1908-8F to 15% (Tgx 1740-2F, when compared to the performance of those cultivars under the current CO2 atmospheric concentration. Moreover, our results show that the MERRA, RegCM4 (Regional Climatic Model version 4 and CNRM-CM5 (Centre National de Recherches Météorologiques – Climatic Model version 5 models provided more accurate estimates of yield, while other models underestimated yield as compared to observations, a fact that was demonstrated to be related to each model's capability of reproducing the precipitation and the surface radiation amount.
Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation
Esmail, Maged Abdullah
2016-07-18
Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation of up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data coming from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model achieves an average root-mean-square error (RMSE) that is at least 9 dB lower. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system addressing various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes that are lower than a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.
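For context, the classic empirical fog-attenuation model that papers like this improve upon is the Kruse/Kim visibility model, sketched below. This is the standard textbook formulation, not the unified model the paper derives from its measurement data.

```python
def kim_q(v_km):
    """Particle size-distribution exponent q of the Kim model,
    as a piecewise function of visibility V in km."""
    if v_km > 50:
        return 1.6
    if v_km > 6:
        return 1.3
    if v_km > 1:
        return 0.16 * v_km + 0.34
    if v_km > 0.5:
        return v_km - 0.5
    return 0.0   # dense fog: attenuation independent of wavelength

def fog_attenuation_db_per_km(v_km, wavelength_nm=1550):
    """Specific attenuation (dB/km) from visibility, Kim model:
    beta = (3.91 / V) * (wavelength / 550 nm)^(-q)."""
    return 3.91 / v_km * (wavelength_nm / 550.0) ** (-kim_q(v_km))
```

In dense fog (visibility 100 m) the model gives about 39 dB/km at any wavelength, which is why FSO becomes a short-range technology; in light haze, longer wavelengths such as 1550 nm attenuate less than 850 nm.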
Modeling the Propagation of Mobile Phone Virus under Complex Network
Directory of Open Access Journals (Sweden)
Wei Yang
2014-01-01
A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled to understand how particular factors affect it and to design effective containment strategies. Two different propagation models of mobile phone viruses under a complex network are proposed: one describes the propagation of user-tricking viruses, and the other the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted, from which the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively.
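The "traditional epidemic models" the paper builds on are SIR-type compartment models; a minimal Euler-integrated version is sketched below (the paper's models add network topology and phone-specific behavior on top). Parameter values are illustrative assumptions.

```python
def simulate_sir(beta, gamma, s0, i0, dt=0.01, steps=10000):
    """Classic SIR epidemic: susceptible phones are infected at rate
    beta*S*I and infected phones are cleaned at rate gamma*I.
    Returns final fractions and the infected-fraction peak."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i     # new infections per unit time
        rec = gamma * i            # disinfections per unit time
        s -= dt * new_inf
        i += dt * (new_inf - rec)
        r += dt * rec
        peak = max(peak, i)
    return s, i, r, peak
```

The infection-free equilibrium (i = 0) is stable exactly when the effective reproduction number beta*s/gamma drops below one, the same kind of stability condition the paper derives for its two models.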
Simulation modelling of a patient surge in an emergency department under disaster conditions
Directory of Open Access Journals (Sweden)
Muhammet Gul
2015-10-01
The efficiency of emergency departments (EDs) in handling patient surges during disasters using the available resources is very important. Many EDs require additional resources to overcome bottlenecks in emergency systems. It is assumed that EDs respond to increased demand by, among other options, dispatching temporary staff or even temporarily hiring non-hospital medical staff. Discrete event simulation (DES), a well-known simulation method based on the idea of process modeling, is used for establishing models of ED operations and management. In this study, a DES model is developed to investigate and analyze an ED under normal conditions and under a disaster scenario that takes into consideration an increased influx of disaster victims-patients. This will allow early preparedness of emergency departments in terms of physical and human resources. The studied ED is located in an earthquake zone in Istanbul. The report on Istanbul's disaster preparedness presented by the Japan International Cooperation Agency (JICA) and Istanbul Metropolitan Municipality (IMM) asserts that the district where the ED is located is estimated to have the highest injury rate. Based on real case-study information, the study aims to suggest a model for pre-planning of ED resources for disasters. The results indicate that in times of a possible disaster, when the percentage of red patient arrivals exceeds 20% of total patient arrivals, the number of red area nurses and the available space for red area patients will be insufficient for the department to operate effectively. As a methodological improvement, different distribution functions were tested for the service times of the treatment areas. The conclusion is that the Weibull distribution function fits the service process of the injection room better than the Gamma distribution function.
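A minimal DES of a single treatment area illustrates the modeling approach: Poisson arrivals, a pool of parallel nurses, and Weibull-distributed service times (the family the study found to fit the injection room). All parameter values here are illustrative assumptions, not the study's calibrated inputs.

```python
import heapq
import random

def simulate_ed(n_patients, arrival_rate, n_nurses, shape, scale, seed=1):
    """Single-area ED sketch: patients arrive as a Poisson process and
    are served by the earliest-free of n_nurses, with Weibull(scale,
    shape) service times. Returns the mean waiting time."""
    rng = random.Random(seed)
    free_at = [0.0] * n_nurses       # min-heap of nurse next-free times
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)        # next arrival time
        start = max(t, free_at[0])                # earliest free nurse
        waits.append(start - t)
        heapq.heapreplace(free_at, start + rng.weibullvariate(scale, shape))
    return sum(waits) / len(waits)
```

Raising the arrival rate while holding the nurse count fixed, as a disaster surge does, makes the mean wait climb sharply once utilization approaches one, which is exactly the bottleneck behavior the full model quantifies.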
Thermal modelling of PV module performance under high ambient temperatures
Energy Technology Data Exchange (ETDEWEB)
Diarra, D.C.; Harrison, S.J. [Queen's Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering, Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering
2005-07-01
When predicting the performance of photovoltaic (PV) generators, the actual performance is typically lower than results obtained under standard test conditions, because the radiant energy absorbed in the module under normal operation raises the temperature of the cells and other multilayer components. The increase in temperature translates into a lower conversion efficiency of the solar cells. To address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a Tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards the bottom and the top of the module, and that its temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.
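The temperature-efficiency effect the model resolves in detail is often approximated with the simple NOCT relation and a linear efficiency derating; that shortcut is sketched below for contrast. NOCT, reference efficiency, and the temperature coefficient are typical crystalline-silicon values used as assumptions, not the paper's results.

```python
def cell_temperature(t_ambient, irradiance, noct=45.0):
    """NOCT estimate of cell temperature (deg C):
    T_cell = T_amb + (NOCT - 20) / 800 * G, with G in W/m2."""
    return t_ambient + (noct - 20.0) / 800.0 * irradiance

def pv_efficiency(t_cell, eta_ref=0.15, beta=0.004, t_ref=25.0):
    """Linear efficiency derating with cell temperature; beta ~ 0.4 %/K
    is typical for crystalline silicon (assumed value)."""
    return eta_ref * (1.0 - beta * (t_cell - t_ref))
```

At 35 deg C ambient and 800 W/m2, a NOCT-45 module runs near 60 deg C and loses about 14% of its rated efficiency, illustrating why high ambient temperatures degrade field performance relative to standard test conditions.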
Modeling Bird Migration under Climate Change: A Mechanistic Approach
Smith, James A.
2009-01-01
How will migrating birds respond to changes in the environment under climate change? What are the implications for migratory success under the various accelerated climate change scenarios as forecast by the Intergovernmental Panel on Climate Change? How will reductions or increased variability in the number or quality of wetland stop-over sites affect migratory bird species? The answers to these questions have important ramifications for conservation biology and wildlife management. Here, we describe the use of continental-scale simulation modeling to explore how spatio-temporal changes along migratory flyways affect en-route migration success. We use an individually based, biophysical, mechanistic bird migration model to simulate the movement of shorebirds in North America as a tool to study how such factors as drought and wetland loss may impact migratory success and modify migration patterns. Our model is driven by remote sensing and climate data and incorporates important landscape variables. The energy budget components of the model include resting, foraging, and flight, but presently predation is ignored. Results/Conclusions: We illustrate our model by studying the spring migration of sandpipers through the Great Plains to their Arctic breeding grounds. Why many species of shorebirds have shown significant declines remains a puzzle. Shorebirds are sensitive to stop-over quality and spacing because of their need for frequent refueling stops and their opportunistic feeding patterns. We predict bird "hydrographs", that is, stop-over frequency with latitude, that are in agreement with the literature. Mean stop-over durations predicted from our model for nominal cases also are consistent with the limited, but available data. For the shorebird species simulated, our model predicts that shorebirds exhibit significant plasticity and are able to shift their migration patterns in response to changing drought conditions. However, the question remains as to whether this
Regional Climate Variability Under Model Simulations of Solar Geoengineering
Dagon, Katherine; Schrag, Daniel P.
2017-11-01
Solar geoengineering has been shown in modeling studies to successfully mitigate global mean surface temperature changes from greenhouse warming. Changes in land surface hydrology are complicated by the direct effect of carbon dioxide (CO2) on vegetation, which alters the flux of water from the land surface to the atmosphere. Here we investigate changes in boreal summer climate variability under solar geoengineering using multiple ensembles of model simulations. We find that spatially uniform solar geoengineering creates a strong meridional gradient in the Northern Hemisphere temperature response, with less consistent patterns in precipitation, evapotranspiration, and soil moisture. Using regional summertime temperature and precipitation results across 31-member ensembles, we show a decrease in the frequency of heat waves and consecutive dry days under solar geoengineering relative to a high-CO2 world. However in some regions solar geoengineering of this amount does not completely reduce summer heat extremes relative to present day climate. In western Russia and Siberia, an increase in heat waves is connected to a decrease in surface soil moisture that favors persistent high temperatures. Heat waves decrease in the central United States and the Sahel, while the hydrologic response increases terrestrial water storage. Regional changes in soil moisture exhibit trends over time as the model adjusts to solar geoengineering, particularly in Siberia and the Sahel, leading to robust shifts in climate variance. These results suggest potential benefits and complications of large-scale uniform climate intervention schemes.
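The heat-wave and consecutive-dry-day statistics analyzed in the ensembles can be computed from daily series with simple run counting. The definitions below (3+ consecutive days above a threshold; dry day = precipitation below 1 mm) are common conventions used here as assumptions, not necessarily the study's exact definitions.

```python
def count_heat_waves(tmax_series, threshold, min_days=3):
    """Count heat-wave events: runs of at least min_days consecutive
    days with daily maximum temperature above threshold."""
    events, run = 0, 0
    for t in tmax_series:
        if t > threshold:
            run += 1
            if run == min_days:   # count each qualifying run once
                events += 1
        else:
            run = 0
    return events

def max_consecutive_dry_days(precip_series, wet_threshold=1.0):
    """Longest run of days with precipitation below wet_threshold (mm)."""
    best = run = 0
    for p in precip_series:
        run = run + 1 if p < wet_threshold else 0
        best = max(best, run)
    return best
```

Applied across ensemble members and scenarios, such counts give exactly the kind of frequency comparison (geoengineered vs. high-CO2 vs. present-day summers) reported in the study.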
Modeling non-monotonic properties under propositional argumentation
Wang, Geng; Lin, Zuoquan
2013-03-01
In the field of knowledge representation, argumentation is usually considered as an abstract framework for nonclassical logic. In this paper, however, we present a propositional argumentation framework, which can be used to more closely simulate real-world argumentation. We thereby argue that under a dialectical argumentation game, we can allow non-monotonic reasoning even under classical logic. We introduce two methods for gaining non-monotonicity: one by assigning plausibility to arguments, the other by adding "exceptions", which are similar to defaults. Furthermore, we give an alternative definition of propositional argumentation using argumentative models, which is closely related to the previous reasoning method but comes with a simple algorithm for calculation.
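To make the dialectical setting concrete, the standard grounded-extension computation for an abstract argumentation framework is sketched below: iterate the characteristic function (an argument is accepted when all of its attackers are themselves attacked by already-accepted arguments). This is the classic Dung-style construction, shown for background; the paper's propositional framework adds structure on top of it.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of (arguments, attacks), where attacks is a
    set of (attacker, target) pairs, via fixed-point iteration of the
    characteristic function starting from the empty set."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    ext = set()
    while True:
        # acceptable: every attacker is counter-attacked by ext
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in ext)
                      for b in attackers[a])}
        if new == ext:
            return ext
        ext = new
```

For the chain a attacks b, b attacks c, the grounded extension is {a, c}: a is unattacked, so it defeats b, which in turn reinstates c, mirroring how a dialectical game reinstates an argument whose attacker is itself defeated.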
Development of ionospheric data assimilation model under geomagnetic storm conditions
Lin, C. C. H.; Chen, C. H.; Chen, W.; Matsuo, T.
2016-12-01
This study attempts to construct an ionospheric data assimilation model for both the quiet- and storm-time ionosphere. The model assimilates radio occultation and ground-based GNSS observations of the global ionosphere using the Ensemble Kalman Filter (EnKF) software of the Data Assimilation Research Testbed (DART) together with the theoretical thermosphere-ionosphere-electrodynamics general circulation model (TIEGCM) developed by the National Center for Atmospheric Research (NCAR). Using DART-TIEGCM, we investigate the effects of rapid assimilation-forecast cycling for the 26 September 2011 geomagnetic storm period. The effects of various assimilation-forecast cycles (60, 30, and 10 minutes) on the ionospheric forecast are examined using the global root-mean-square of observation-minus-forecast (OmF) TEC residuals over the entire storm period. The examinations show that the 10-minute assimilation cycle greatly improves the quality of the model forecast under storm conditions. Additionally, examinations of storm-time forecast quality for different high-latitude forcing given by the Heelis and Weimer empirical models are also performed.
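The EnKF analysis step at the core of such systems can be sketched for a single scalar observation with perturbed observations. This is a minimal sketch of the generic update (no localization or inflation), not the DART implementation; the ensemble values in the example are assumptions.

```python
import random

def enkf_update(ensemble, obs, obs_std, h_index, seed=0):
    """One EnKF analysis step: ensemble is a list of state vectors, obs a
    scalar observation of state component h_index. Each member is nudged
    by the Kalman gain toward its own perturbed copy of the observation."""
    rng = random.Random(seed)
    n, dim = len(ensemble), len(ensemble[0])
    hx = [m[h_index] for m in ensemble]
    hx_mean = sum(hx) / n
    var_hx = sum((v - hx_mean) ** 2 for v in hx) / (n - 1)
    means = [sum(m[j] for m in ensemble) / n for j in range(dim)]
    # cross-covariance of each state component with the observed one
    cov = [sum((m[j] - means[j]) * (m[h_index] - hx_mean)
               for m in ensemble) / (n - 1) for j in range(dim)]
    gain = [c / (var_hx + obs_std ** 2) for c in cov]
    return [[m[j] + gain[j] * (obs + rng.gauss(0.0, obs_std) - m[h_index])
             for j in range(dim)] for m in ensemble]
```

With an accurate observation (small obs_std relative to the ensemble spread) the gain approaches one and the analysis ensemble collapses toward the observed value; shortening the cycle, as in the 10-minute experiments, applies this correction before storm-time model error can grow.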
Nonlinear modeling of magnetorheological energy absorbers under impact conditions
Mao, Min; Hu, Wei; Choi, Young-Tai; Wereley, Norman M.; Browne, Alan L.; Ulicny, John; Johnson, Nancy
2013-11-01
Magnetorheological energy absorbers (MREAs) provide adaptive vibration and shock mitigation capabilities to accommodate varying payloads, vibration spectra, and shock pulses, as well as other environmental factors. A key performance metric is the dynamic range, which is defined as the ratio of the force at maximum field to the force in the absence of field. The off-state force is typically assumed to increase linearly with speed, but at the higher shaft speeds occurring in impact events, the off-state damping exhibits nonlinear velocity squared damping effects. To improve understanding of MREA behavior under high-speed impact conditions, this study focuses on nonlinear MREA models that can more accurately predict MREA dynamic behavior for nominal impact speeds of up to 6 m s-1. Three models were examined in this study. First, a nonlinear Bingham-plastic (BP) model incorporating Darcy friction and fluid inertia (Unsteady-BP) was formulated where the force is proportional to the velocity. Second, a Bingham-plastic model incorporating minor loss factors and fluid inertia (Unsteady-BPM) to better account for high-speed behavior was formulated. Third, a hydromechanical (HM) analysis was developed to account for fluid compressibility and inertia as well as minor loss factors. These models were validated using drop test data obtained using the drop tower facility at GM R&D Center for nominal drop speeds of up to 6 m s-1.
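The force structure shared by the Bingham-plastic models above, and the dynamic-range metric, can be sketched as follows. The coefficient values in the example are illustrative assumptions, not identified MREA parameters.

```python
def mrea_force(v, c_visc, f_yield, c_quad=0.0):
    """Bingham-plastic MREA force: viscous term c*v, field-dependent
    yield force F_y*sgn(v), plus an optional quadratic minor-loss term
    capturing velocity-squared damping at impact speeds."""
    sgn = (v > 0) - (v < 0)
    return c_visc * v + f_yield * sgn + c_quad * v * abs(v)

def dynamic_range(v, c_visc, f_yield_on, c_quad=0.0):
    """Dynamic range: force at maximum field over off-state force."""
    on = mrea_force(v, c_visc, f_yield_on, c_quad)
    off = mrea_force(v, c_visc, 0.0, c_quad)
    return on / off
```

Because the off-state quadratic term grows with v^2 while the yield force is speed-independent, the dynamic range shrinks at high shaft speeds, which is why the unsteady and hydromechanical refinements matter for 6 m/s impact events.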
Transsexual parenthood and new role assumptions.
Faccio, Elena; Bordin, Elena; Cipolletta, Sabrina
2013-01-01
This study explores the parental role of transsexuals and compares this to common assumptions about transsexuality and parentage. We conducted semi-structured interviews with 14 male-to-female transsexuals and 14 men, half parents and half non-parents, in order to explore four thematic areas: self-representation of the parental role, the description of the transsexual as a parent, the common representations of transsexuals as parents, and male and female parental stereotypes. We conducted thematic and lexical analyses of the interviews using Taltac2 software. The results indicate that social representations of transsexuality and parenthood have a strong influence on processes of self-representation. Transsexual parents accurately understood conventional male and female parental prototypes and saw themselves as competent, responsible parents. They constructed their role based on affection toward the child rather than on the complementary role of their wives. In contrast, men's descriptions of transsexual parental roles were simpler, and the descriptions of their own parental role coincided with their personal experiences. These results suggest that the transsexual journey toward parenthood involves a high degree of re-adjustment, because their parental role does not coincide with a conventional one.
Towards New Probabilistic Assumptions in Business Intelligence
Directory of Open Access Journals (Sweden)
Schumann Andrew
2015-01-01
One of the main assumptions of mathematical tools in science is the idea of the measurability and additivity of reality. To describe the physical universe, additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition, and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, many important variables of economic systems cannot be observable and additive in principle. These variables can be called symbolic values or symbolic meanings and are studied within symbolic interactionism, the theory developed by George Herbert Mead and Herbert Blumer. In the statistical and econometric tools of business intelligence, we accept only phenomena with causal connections measured by additive measures. In this paper we show that in the social world we deal with symbolic interactions, which can be studied by non-additive labels (symbolic meanings or symbolic values). To accommodate the variety of such phenomena, we should avoid the additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.
Replenishment policy for an inventory model under inflation
Singh, Vikramjeet; Saxena, Seema; Singh, Pushpinder; Mishra, Nitin Kumar
2017-07-01
The purpose of replenishment is to maintain the flow of inventory in the system. Determining an optimal replenishment policy is a great challenge in developing an inventory model. Inflation is defined as the rate at which the prices of goods and services rise over a time period, and the cost parameters are affected by the rate of inflation. A high rate of inflation affects an organization's financial condition. Against this backdrop, the present paper proposes a retailer's replenishment policy for deteriorating items with different cycle lengths under inflation. Shortages are partially backlogged. Finally, numerical examples validate the results.
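The effect of inflation on cost parameters can be illustrated with a continuous-compounding present-value calculation; the 6%/10% rates and the functional form below are illustrative assumptions, not parameters from the proposed model:

```python
import math

def present_value(cost, t, inflation=0.06, discount=0.10):
    """Present value of a cost incurred at time t under continuous inflation
    and discounting. Costs are effectively discounted at the net rate
    (discount - inflation); rates here are illustrative assumptions."""
    return cost * math.exp((inflation - discount) * t)

# A holding cost of 100 paid two years out, under the assumed rates:
holding_pv = present_value(100.0, 2.0)
```

When inflation approaches the discount rate, the net discount vanishes and later-cycle costs weigh as heavily as immediate ones, which is why inflation changes the optimal cycle lengths.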
Economic Growth Assumptions in Climate and Energy Policy
Directory of Open Access Journals (Sweden)
Nir Y. Krakauer
2014-03-01
The assumption that the economic growth seen in recent decades will continue has dominated the discussion of future greenhouse gas emissions and the mitigation of and adaptation to climate change. Given that long-term economic growth is uncertain, the impacts of a wide range of growth trajectories should be considered. In particular, slower economic growth would imply that future generations will be relatively less able to invest in emissions controls or adapt to the detrimental impacts of climate change. Taking into consideration the possibility of economic slowdown therefore heightens the urgency of reducing greenhouse gas emissions now by moving to renewable energy sources, even if this incurs short-term economic cost. I quantify this counterintuitive impact of economic growth assumptions on present-day policy decisions in a simple global economy-climate model, the Dynamic Integrated model of Climate and the Economy (DICE). In DICE, slow future growth increases the economically optimal present-day carbon tax rate and the utility of taxing carbon emissions, although the magnitude of the increase is sensitive to model parameters, including the rate of social time preference and the elasticity of the marginal utility of consumption. Future scenario development should specifically include low-growth scenarios, and the possibility of low-growth economic trajectories should be taken into account in climate policy analyses.
Modeling the Underlying Dynamics of the Spread of Crime
McMillon, David; Simon, Carl P.; Morenoff, Jeffrey
2014-01-01
The spread of crime is a complex, dynamic process that calls for a systems level approach. Here, we build and analyze a series of dynamical systems models of the spread of crime, imprisonment and recidivism, using only abstract transition parameters. To find the general patterns among these parameters—patterns that are independent of the underlying particulars—we compute analytic expressions for the equilibria and for the tipping points between high-crime and low-crime equilibria in these models. We use these expressions to examine, in particular, the effects of longer prison terms and of increased incarceration rates on the prevalence of crime, with a follow-up analysis on the effects of a Three-Strike Policy. PMID:24694545
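A dynamical-systems treatment of crime spread of the kind described above can be sketched with a three-compartment model using abstract transition rates; the compartments and rate values below are illustrative assumptions, not the authors' model:

```python
# Minimal three-compartment sketch of crime/imprisonment/recidivism dynamics,
# in the spirit of the paper's "abstract transition parameters". All rate
# values are illustrative assumptions.
def step(x, beta=0.3, gamma=0.2, rho=0.1, mu=0.05, dt=0.01):
    s, c, p = x                   # susceptible, criminal, imprisoned fractions
    new_crime  = beta * s * c     # social spread of criminal behaviour
    imprisoned = gamma * c        # incarceration flow
    released   = rho * p          # release from prison (potential recidivism)
    desist     = mu * c           # desistance back to susceptible
    ds = -new_crime + desist
    dc = new_crime - imprisoned - desist + released
    dp = imprisoned - released
    return (s + ds * dt, c + dc * dt, p + dp * dt)

x = (0.9, 0.1, 0.0)
for _ in range(10000):            # forward-Euler integration toward equilibrium
    x = step(x)
```

Because every flow leaves one compartment and enters another, the total population is conserved; equilibria and tipping points can then be read off by setting the flows to balance.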
Instrumental variables estimation under a structural Cox model
DEFF Research Database (Denmark)
Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn
2017-01-01
Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic approaches. More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context with dichotomous exposure and instrument, amongst other limitations. The aim of this article is to reconsider IV estimation under a structural Cox model, allowing for arbitrary exposure and instruments. We propose a novel class of estimators and derive their asymptotic properties. The methodology is illustrated using two real data applications, and using simulated data.
Dynamic malware containment under an epidemic model with alert
Zhang, Tianrui; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan
2017-03-01
Alerting at the early stage of malware invasion turns out to be an important complement to malware detection and elimination. This paper addresses the issue of how to dynamically contain the prevalence of malware at a lower cost, provided alerting is feasible. A controlled epidemic model with alert is established, and an optimal control problem based on the epidemic model is formulated. The optimality system for the optimal control problem is derived. The structure of an optimal control for the proposed optimal control problem is characterized under some conditions. Numerical examples show that the cost-efficiency of an optimal control strategy can be enhanced by adjusting the upper and lower bounds on admissible controls.
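The idea of containing malware prevalence with a bounded, alert-driven control can be sketched with an SIS-type forward simulation; the model form, control bound, and parameters below are illustrative assumptions rather than the optimality system derived in the paper:

```python
# Sketch: malware prevalence i(t) under a constant admissible cure-rate
# control u added to the natural cure rate delta. The SIS-type dynamics and
# all parameter values are illustrative assumptions, not the paper's system.
def simulate(u, beta=0.5, delta=0.1, i0=0.01, dt=0.01, steps=5000):
    i = i0
    for _ in range(steps):
        di = beta * i * (1.0 - i) - (delta + u) * i   # infection minus cure
        i += di * dt                                  # forward-Euler step
    return i

i_uncontrolled = simulate(u=0.0)   # no alert-based treatment
i_controlled   = simulate(u=0.3)   # stronger admissible control
```

An optimal control would trade this prevalence reduction against the cost of applying u, exactly the balance formalized in the paper's optimal control problem.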
Modelling of Performance of Caisson Type Breakwaters under Extreme Waves
Güney Doǧan, Gözde; Özyurt Tarakcıoǧlu, Gülizar; Baykal, Cüneyt
2016-04-01
Many coastal structures are designed without considering the loads of tsunami-like or long waves, although they are constructed in areas prone to encountering such waves. The performance of caisson type breakwaters under extreme swells is tested in the Middle East Technical University (METU) Coastal and Ocean Engineering Laboratory. This paper presents the comparison of pressure measurements taken along the surface of caisson type breakwaters with those obtained from numerical modelling using IH2VOF, as well as the damage behavior of the breakwater under the same extreme swells tested in a wave flume at METU. Experiments are conducted in a 1.5 m wide wave flume, which is divided into two parallel sections (0.74 m wide each). A piston-type wave maker located at one end of the wave basin is used to generate the long-wave conditions. The water depth is set at 0.4 m and kept constant during the experiments. A caisson type breakwater is constructed on one side of the divided flume. The model scale, based on the Froude similitude law, is chosen as 1:50. Seven different wave conditions are applied in the tests, with wave periods ranging from 14.6 s to 34.7 s, wave heights from 3.5 m to 7.5 m, and steepnesses from 0.002 to 0.015 in prototype scale. The design wave parameters for the breakwater were a 5 m wave height and a 9.5 s wave period in prototype scale. To determine the damage to the breakwater, which was designed for this wave but tested under swell waves, video and photo analyses as well as breakwater profile measurements before and after each test are performed. Further investigations are carried out on the wave forces acting on the concrete blocks of the caisson structures via pressure measurements on the surfaces of these structures, which are fixed to the channel bottom. Finally, these pressure measurements will be compared with the results obtained from the numerical study using IH2VOF, which is one of the RANS models that can be applied to simulate
Numerical modeling of tokamak breakdown phase driven by pure Ohmic heating under ideal conditions
Jiang, Wei; Peng, Yanli; Zhang, Ya; Lapenta, Giovanni
2016-12-01
We have simulated the tokamak breakdown phase driven by pure Ohmic heating with an implicit particle-in-cell/Monte Carlo collision (PIC/MCC) method. We find that two modes can be differentiated. When breakdown is performed at low initial gas pressure, the discharge operates at lower density and current but higher temperature, and requires lower heating power, compared to breakdown at high initial pressure. Further, two stages can be distinguished during the avalanche process. One is the fast avalanche stage, in which the plasma is heated by the induced toroidal electric field. The other is the slow avalanche stage, which begins when the plasma density reaches 10¹⁵ m⁻³. It has been shown that ions are mainly heated by the ambipolar field and become stochastic in their velocity distribution. However, when the induced electric field is low, there exists a transition phase between the two stages. Our model simulates the breakdown and early hydrogen burn-through under ideal conditions during tokamak start-up. It adopts fewer assumptions and can give an idealized range of operating parameters for Ohmic start-up. Qualitatively, the results agree well with certain experimental observations.
Model tracking dual stochastic controller design under irregular internal noises
International Nuclear Information System (INIS)
Lee, Jong Bok; Heo, Hoon; Cho, Yun Hyun; Ji, Tae Young
2006-01-01
Although many methods for the control of irregular external noise have been introduced and implemented, it is still necessary to design controllers that are more effective and efficient at excluding various noises. Accumulation of errors due to model tracking, internal noises (thermal noise, shot noise and 1/f noise) that come from elements such as resistors, diodes and transistors in the circuit system, and numerical errors due to digital processing often destabilize the system and reduce its performance. A new stochastic controller is adopted to remove these noises while using a conventional controller simultaneously. A design method for a model tracking dual controller is proposed to improve the stability of the system while removing external and internal noises. In this study, the design process of the model tracking dual stochastic controller is introduced; it improves system performance and guarantees robustness under irregular internal noises which can be created internally. The model tracking dual stochastic controller, utilizing the F-P-K stochastic control technique developed earlier, is implemented to reveal its performance via simulation.
Modeling of thermal explosion under pressure in metal ceramic systems
International Nuclear Information System (INIS)
Shapiro, M.; Dudko, V.; Skachek, B.; Matvienko, A.; Gotman, I.; Gutmanas, E.Y.
1998-01-01
The process of reactive in situ synthesis of dense ceramic matrix composites in Ti-B-C, Ti-B-N, and Ti-Si-N systems is modeled. These ceramics are fabricated from compacted blends of ceramic powders, namely Ti-B₄C and/or Ti-BN. The objective of the project is to identify and investigate the optimal thermal conditions for producing fully dense ceramic matrix composites. Towards this goal, heat transfer and combustion in dense and porous ceramic blends are investigated during monotonic heating at a constant rate. This process is modeled using a heat transfer-combustion model with kinetic parameters determined from differential thermal analysis of the experimental data. The kinetic burning parameters and the model developed are further used to describe thermal explosion synthesis in a restrained die under pressure. It is shown that heat removal from the reaction zone affects the combustion process and the final phase composition.
Selection of Representative Models for Decision Analysis Under Uncertainty
Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.
2016-03-01
The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. First, a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute levels of the problem. The proposed technique was applied to two benchmark cases, and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
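The representativeness idea above can be sketched by matching a few selected scenarios to quantiles of the full ensemble's risk curve; this greedy quantile matching and the synthetic NPV data are simplifications, not the paper's optimization tool:

```python
import numpy as np

# Greedy sketch of representative-scenario selection: pick one scenario per
# target quantile of the full ensemble so the reduced set spans the risk
# curve without optimistic or pessimistic bias. Data are synthetic.
rng = np.random.default_rng(0)
npv = rng.normal(loc=100.0, scale=20.0, size=500)   # 500 scenario outcomes

targets = np.percentile(npv, [10, 50, 90])          # P10/P50/P90 of full set

chosen = []
for t in targets:                                   # nearest scenario to each
    chosen.append(int(np.argmin(np.abs(npv - t))))

representatives = npv[chosen]                       # the reduced subset
```

The paper's method generalizes this by optimizing a representativeness function over cross-plots, risk curves, and attribute distributions jointly, rather than matching quantiles one at a time.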
A quasilinear model for solute transport under unsaturated flow
International Nuclear Information System (INIS)
Houseworth, J.E.; Leem, J.
2009-01-01
We developed an analytical solution for solute transport under steady-state, two-dimensional, unsaturated flow and transport conditions for the investigation of high-level radioactive waste disposal. The two-dimensional, unsaturated flow problem is treated using the quasilinear flow method for a system with homogeneous material properties. Dispersion is modeled as isotropic and is proportional to the effective hydraulic conductivity. This leads to a quasilinear form for the transport problem in terms of a scalar potential that is analogous to the Kirchhoff potential for quasilinear flow. The solutions for both the flow and transport scalar potentials take the form of Fourier series. The particular solution given here is for two sources of flow, with one source containing a dissolved solute; the solution method may easily be extended, however, to any combination of flow and solute sources under steady-state conditions. The analytical results for multidimensional solute transport problems, which previously could only be solved numerically, also offer an additional way to benchmark numerical solutions.
The role of uncertainty in supply chains under dynamic modeling
Directory of Open Access Journals (Sweden)
M. Fera
2017-01-01
The uncertainty in the supply chains (SCs) of manufacturing and services firms is going to be, over the coming decades, increasingly important for companies that must compete in a newly globalized economy. Risky situations for manufacturing are considered in trying to identify the optimal positioning of the order penetration point (OPP). This means defining the best level of information about the client's order going back through the several supply chain (SC) phases, i.e. engineering, procurement, production and distribution. This work aims at defining a system dynamics model to assess the competitiveness resulting from positioning the order at different SC locations. A Taguchi analysis has been implemented to create a decision map for identifying possible strategic decisions under different scenarios and with alternatives for order location in the SC levels. Centralized and decentralized strategies for SC integration are discussed. In the proposed model, the location of the OPP is influenced by demand variation, production time, stock-outs and stock amount. The results of this research are as follows: (i) customer-oriented strategies are preferable under high volatility of demand; (ii) production-focused strategies are suggested when the probability of stock-outs is high; (iii) no specific location is preferable if a centralized control architecture is implemented; (iv) centralization requires cooperation among partners to achieve the SC optimum point; (v) the producer must not prefer the OPP location at the retailer level when the general strategy is focused on a decentralized approach.
Key management and encryption under the bounded storage model.
Energy Technology Data Exchange (ETDEWEB)
Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.; Anderson, William Erik
2005-11-01
There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder, and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work, we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.
Kriegler, E.; Edmonds, J.; Hallegatte, S.; Ebi, K.L.; Kram, T.; Riahi, K.; Winkler, J.; van Vuuren, Detlef
2014-01-01
The new scenario framework facilitates the coupling of multiple socioeconomic reference pathways with climate model products using the representative concentration pathways. This will allow for improved assessment of climate impacts, adaptation and mitigation. Assumptions about climate policy play a
Local conservation scores without a priori assumptions on neutral substitution rates.
Dingel, Janis; Hanus, Pavol; Leonardi, Niccolò; Hagenauer, Joachim; Zech, Jürgen; Mueller, Jakob C
2008-04-11
Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates of the neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate, and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window, and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates, and few a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE, while qualitatively very different patterns from phastCons are observed. As opposed to standard methods, KuLCons can be extended to more complex evolutionary models, e.g. taking insertion and deletion events into account, and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model. Our results suggest that discriminating among the
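The Kullback-Leibler projection described above can be illustrated by scoring a locally estimated distribution against a background distribution; the uniform background and window probabilities below are illustrative assumptions, not KuLCons output:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for discrete
    distributions. Here it projects locally estimated substitution
    probabilities onto a scalar conservation-style score."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

background = [0.25, 0.25, 0.25, 0.25]  # assumed uniform base composition
window    = [0.70, 0.10, 0.10, 0.10]   # locally estimated, conserved-looking

score = kl_divergence(window, background)
```

A window whose estimated parameters match the background scores zero, while strong local skew (as in a conserved element) yields a large positive score, without ever fixing a neutral rate.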
Ernst, Anja F.
2017-01-01
Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. They lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971
Modeling the vulnerability of hydroelectricity generation under drought scenarios
Yan, E.; Tidwell, V. C.; Bizjack, M.; Espinoza, V.; Jared, A.
2015-12-01
Hydroelectricity generation relies heavily on in-stream and reservoir water availability. The western US has recently experienced increasingly severe, frequent, and prolonged droughts, resulting in significant water availability issues. A large number of hydropower plants in the Western Electricity Coordinating Council (WECC) are located in the California River Basin and the Pacific Northwest River Basin. In support of the WECC's long-term transmission planning, a drought impact analysis was performed with a series of data and modeling tools. This presentation will demonstrate a case study for the California River Basin, which has recently experienced one of the worst droughts in its history. The purpose of this study is to evaluate the potential risk to hydroelectricity generation due to projected drought scenarios in the medium term (through the year 2030). On the basis of historical droughts and the projected drought years for 2020-2030, three drought scenarios were identified. A hydrologic model was constructed and calibrated to simulate evapotranspiration, streamflow, soil moisture, irrigation, and reservoir storage and discharge, based on various dam operation rules and targets, under the three drought scenarios. The model also incorporates the projected future water demand in 2030 (e.g. municipal, agricultural, electricity generation). The projected monthly reservoir discharges were used to predict the monthly hydropower generation of hydropower plants with a capacity greater than 50 MW in the California River Basin for each drought scenario. The results of this study identify the spatial distribution of vulnerable hydropower plants and watersheds, as well as the level of potential reduction in electricity generation under various drought scenarios, and provide valuable insights into future mitigation strategies and long-term planning.
Challenges in Species Tree Estimation Under the Multispecies Coalescent Model.
Xu, Bo; Yang, Ziheng
2016-12-01
The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly, but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make efficient use of the information in multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods, but that they are due to inefficient use of information in the data and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon the addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the
Sustainable infrastructure system modeling under uncertainties and dynamics
Huang, Yongxi
Infrastructure systems support human activities in transportation, communication, water use, and energy supply. This dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research is to improve the sustainability of these infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research on critical transportation infrastructure concerns the development of strategic, robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) to reflect various risk preferences. The models are evaluated in a case study of Singapore, and the results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches in achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The research on renewable energy infrastructure systems aims to answer the following key questions: (1) is renewable energy an economically viable solution? (2) what energy distribution and infrastructure system requirements are needed to support such energy supply systems while hedging against potential risks? and (3) how does the energy system adapt to the dynamics of evolving technology and societal needs during the transition to a renewable-energy-based society? The study of Renewable Energy System Planning with Risk Management incorporates risk management into strategic supply chain planning. The physical design and operational management are integrated as a whole in seeking mitigations against the
Stochastic reduced order models for inverse problems under uncertainty.
Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D
2015-03-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.
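The core SROM idea can be sketched as follows. The lognormal target distribution, the equal-probability quantile construction, and the toy "solver" are all assumptions for illustration; they are not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous random input (e.g. an uncertain material parameter).
samples = rng.lognormal(mean=0.0, sigma=0.25, size=100_000)

# SROM: a few support points with weights. Here: equal-probability
# quantiles with uniform weights; a real SROM optimizes both points and
# weights against CDF/moment errors.
m = 10
support = np.quantile(samples, (np.arange(m) + 0.5) / m)
weights = np.full(m, 1.0 / m)

def deterministic_solver(g):
    """Stand-in for an expensive deterministic model evaluation."""
    return 1.0 / g   # e.g. a compliance that scales inversely with stiffness

# Non-intrusive propagation: m deterministic solver calls give the SROM
# estimate of the output mean; compare with brute-force Monte Carlo.
srom_mean = float(np.sum(weights * deterministic_solver(support)))
mc_mean = float(deterministic_solver(samples).mean())
print(srom_mean, mc_mean)
```

Because the SROM is a finite set of weighted deterministic evaluations, wrapping it around an existing solver requires no changes to the solver itself, which is the non-intrusiveness the abstract emphasizes.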
Model of personal consumption under conditions of modern economy
Rakhmatullina, D. K.; Akhmetshina, E. R.; Ignatjeva, O. A.
2017-12-01
In the conditions of the modern economy, in connection with the development of production, the expansion and differentiation of the market for goods and services, and the active use of marketing tools in sales, changes occur in the system of values and consumer needs. The motives that drive consumers are transformed, stimulating them to act. The article presents a model of personal consumption that takes into account modern trends in consumer behavior. The consumer, making a choice, seeks to maximize the overall utility from consumption, both physiological and socio-psychological satisfaction, in accordance with his expectations, preferences and conditions of consumption. The system of his preferences is formed under the influence of factors of a different nature. It is also shown that the structure of consumer spending allows us to characterize and predict the consumer's further behavior in the market. Based on the proposed model and an analysis of current trends in consumer behavior, conclusions and recommendations have been made that can be used by legislative and executive government bodies, business organizations, research centres and other structures to form a methodological and analytical tool for preparing a forecast model of consumption.
Internal modelling under Risk-Based Capital (RBC) framework
Ling, Ang Siew; Hin, Pooi Ah
2015-12-01
Very often the methods for internal modelling under the Risk-Based Capital framework make use of data in the form of a run-off triangle. The present research instead extracts, from a group of n customers, the historical data for the sum insured s_i of the i-th customer together with the amount paid y_ij and the amount a_ij reported but not yet paid in the j-th development year, for j = 1, 2, 3, 4, 5, 6. We model the future value (y_{i,j+1}, a_{i,j+1}) to be dependent on the present-year value (y_ij, a_ij) and the sum insured s_i via a conditional distribution which is derived from a multivariate power-normal mixture distribution. For a group of given customers with different original purchase dates, the distribution of the aggregate claims liabilities may be obtained from the proposed model. The prediction interval based on this distribution is found to cover the observed aggregate claim liabilities well.
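The simulation of the aggregate liability distribution can be sketched as below. The paper derives the one-year transition law from a multivariate power-normal mixture; here a simple linear-Gaussian kernel with invented coefficients stands in for it, purely to show how the prediction interval is obtained:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(y, a, s):
    """One development year: (y, a) evolve given the sum insured s.
    Hypothetical linear-Gaussian stand-in for the conditional law."""
    y_next = 0.5 * a + 0.1 * s + rng.normal(0.0, 0.05 * s)
    a_next = 0.6 * a + rng.normal(0.0, 0.05 * s)
    return max(y_next, 0.0), max(a_next, 0.0)

def simulate_liability(s, n_years=6):
    y, a = 0.0, 0.3 * s          # assumed initial reported-not-paid amount
    total_paid = 0.0
    for _ in range(n_years):
        y, a = step(y, a, s)
        total_paid += y
    return total_paid

sums_insured = [100.0, 150.0, 80.0]     # three customers (invented)
sims = np.array([sum(simulate_liability(s) for s in sums_insured)
                 for _ in range(5000)])
lo, hi = np.percentile(sims, [2.5, 97.5])   # 95% prediction interval
print(lo, hi)
```

Replacing `step` with the fitted power-normal mixture conditional distribution would give the paper's aggregate liability distribution for a real portfolio.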
Philosophy of Technology Assumptions in Educational Technology Leadership
Webster, Mark David
2017-01-01
A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…
The zero-sum assumption in neutral biodiversity theory
Etienne, R.S.; Alonso, D.; McKane, A.J.
2007-01-01
The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the
Modelling crop yield in Iberia under drought conditions
Ribeiro, Andreia; Páscoa, Patrícia; Russo, Ana; Gouveia, Célia
2017-04-01
The improved assessment of cereal yield and crop loss under drought conditions is essential to meet increasing economic demands. The growing frequency and severity of extreme drought conditions in the Iberian Peninsula (IP) has likely been responsible for negative impacts on agriculture, namely on crop yield losses. Therefore, continuous monitoring of vegetation activity and a reliable estimation of drought impacts are crucial contributions to agricultural drought management and to the development of suitable information tools. This work aims to assess the influence of drought conditions on agricultural yields over the IP, considering cereal yields from mainly rainfed agriculture for the provinces with higher productivity. The main target is to develop a strategy to model drought risk on agriculture for wheat yield at a province level. In order to achieve this goal, a combined assessment was made using a drought indicator (Standardized Precipitation Evapotranspiration Index, SPEI) to evaluate drought conditions together with a widely used vegetation index (Normalized Difference Vegetation Index, NDVI) to monitor vegetation activity. A correlation analysis between detrended wheat yield and SPEI was performed in order to assess the vegetation response to each time scale of drought occurrence and also to identify the moment of the vegetative cycle when crop yields are most vulnerable to drought conditions. The time scales and months of SPEI, together with the months of NDVI, best related to wheat yield were chosen to perform a multivariate regression analysis to simulate crop yield. Model results are satisfactory and highlight the usefulness of such analysis in the framework of developing a drought risk model for crop yields. From an operational point of view, the results aim to contribute to an improved understanding of crop yield management under dry conditions, particularly adding substantial information on the advantages of combining
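The detrend-correlate-regress workflow described above can be sketched on synthetic data. The series lengths, coefficients, and noise levels below are invented; the study itself uses provincial wheat yields, SPEI at several time scales, and NDVI:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30   # years of synthetic data

# Synthetic inputs: SPEI at three assumed time scales, and an NDVI series
# that responds to the middle one.
spei = rng.normal(size=(n, 3))
ndvi = 0.5 + 0.1 * spei[:, 1] + rng.normal(0.0, 0.02, n)
trend = 0.01 * np.arange(n)
yield_obs = (2.0 + trend + 0.3 * spei[:, 1]
             + 2.0 * (ndvi - 0.5) + rng.normal(0.0, 0.05, n))

# 1) Detrend the yield series with a linear fit.
t = np.arange(n)
coef = np.polyfit(t, yield_obs, 1)
yield_dt = yield_obs - np.polyval(coef, t)

# 2) Choose the SPEI time scale best correlated with the yield anomaly.
corrs = [np.corrcoef(spei[:, k], yield_dt)[0, 1] for k in range(3)]
best = int(np.argmax(np.abs(corrs)))

# 3) Multivariate regression: yield anomaly ~ SPEI(best scale) + NDVI.
X = np.column_stack([np.ones(n), spei[:, best], ndvi])
beta, *_ = np.linalg.lstsq(X, yield_dt, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((yield_dt - pred) ** 2) / np.sum((yield_dt - yield_dt.mean()) ** 2)
print(best, r2)
```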
International Nuclear Information System (INIS)
Bonten, Luc T.C.; Groenenberg, Jan E.; Meesenburg, Henning; Vries, Wim de
2011-01-01
Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling, Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: → Surface complexation models can be well applied in field studies. → Soil chemistry under a forest site is adequately modelled using generic parameters. → The model is easily extended with extra elements within the existing framework. → Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.
Transient modelling of a natural circulation loop under variable pressure
Energy Technology Data Exchange (ETDEWEB)
Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian, E-mail: avianna@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br, E-mail: faccini@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental
2017-07-01
The objective of the present work is to model the transient operation of a natural circulation loop, which is one-tenth scale in height to a typical Passive Residual Heat Removal system (PRHR) of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the single- and two-phase flow similarity criteria with respect to it. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated from the air trapped in the expansion tank, which was compressed by the dilatation of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the calculation of the buoyancy term. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the heating of the coolant during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was implemented computationally via a set of finite difference equations. The corresponding solution algorithm was explicit and marching in time, with an upwind scheme for the spatial discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, with the coolant flow rate and the heating power as control parameters. The variables used in the
Mechanical Modeling of a WIPP Drum Under Pressure
Energy Technology Data Exchange (ETDEWEB)
Smith, Jeffrey A. [Sandia National Laboratories, Albuquerque, NM (United States)
2014-11-25
Mechanical modeling was undertaken to support the Waste Isolation Pilot Plant (WIPP) technical assessment team (TAT) investigating the February 14, 2014 event where there was a radiological release at the WIPP. The initial goal of the modeling was to examine if a mechanical model could inform the team about the event. The intention was to have a model that could test scenarios with respect to the rate of pressurization. It was expected that the deformation and failure (inability of the drum to contain any pressure) would vary according to the pressurization rate. As the work progressed there was also interest in using the mechanical analysis of the drum to investigate what would happen if a drum pressurized when it was located under a standard waste package. Specifically, would the deformation be detectable from camera views within the room? A finite element model of a WIPP 55-gallon drum was developed that used all hex elements. Analyses were conducted using the explicit transient dynamics module of Sierra/SM to explore potential pressurization scenarios of the drum. These analyses show deformation patterns similar to documented pressurization tests of drums in the literature. The calculated failure pressures from previous tests documented in the literature vary from as little as 16 psi to 320 psi. In addition, previous testing documented in the literature shows drums bulging but not failing at pressures ranging from 69 to 138 psi. The analyses performed for this study found the drums failing at pressures ranging from 35 psi to 75 psi. When the drums are pressurized quickly (in 0.01 seconds) there is significant deformation to the lid. At lower pressurization rates the deformation of the lid is considerably less, yet the lids will still open from the pressure. The analyses demonstrate the influence of pressurization rate on deformation and opening pressure of the drums. Analyses conducted with a substantial mass on top of the closed drum demonstrate that the
A Stone Resource Assignment Model under the Fuzzy Environment
Directory of Open Access Journals (Sweden)
Liming Yao
2012-01-01
to tackle a stone resource assignment problem with the aim of decreasing dust and waste water emissions. On the upper level, the local government wants to assign a reasonable exploitation amount to each stone plant so as to minimize total emissions and maximize employment and economic profit. On the lower level, stone plants must reasonably assign stone resources to produce different stone products under the exploitation constraint. To deal with inherent uncertainties, the objective functions and constraints are defuzzified using a possibility measure. A fuzzy simulation-based improved simulated annealing algorithm (FS-ISA) is designed to search for the Pareto optimal solutions. Finally, a case study is presented to demonstrate the practicality and efficiency of the model. Results and a comparison analysis are presented to highlight the performance of the optimization method, which proves very efficient compared with other algorithms.
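The FS-ISA algorithm itself is not reproduced in the abstract; the sketch below shows plain simulated annealing on a crisp (already defuzzified) single-objective version of the lower-level assignment, with invented emission factors and a total exploitation demand:

```python
import math
import random

random.seed(0)

emission = [1.0, 2.0, 3.0]   # per-plant emission factors (invented)
demand = 10.0                # total exploitation amount to be assigned

def objective(x):
    """Total emissions; quadratic in the assigned amounts (assumed form)."""
    return sum(e * xi ** 2 for e, xi in zip(emission, x))

def neighbour(x):
    """Transfer a small amount between two plants, preserving the demand."""
    i, j = random.sample(range(len(x)), 2)
    d = random.uniform(0.0, min(0.2, x[i]))
    cand = list(x)
    cand[i] -= d
    cand[j] += d
    return cand

def anneal(x, temp=1.0, cooling=0.9995, steps=20000):
    f = objective(x)
    best, best_f = list(x), f
    for _ in range(steps):
        cand = neighbour(x)
        cf = objective(cand)
        # Metropolis acceptance: always take improvements, sometimes worse.
        if cf < f or random.random() < math.exp((f - cf) / temp):
            x, f = cand, cf
            if f < best_f:
                best, best_f = list(x), f
        temp *= cooling
    return best, best_f

sol, val = anneal([demand / 3.0] * 3)
print(sol, val)
```

The full method wraps such an annealing loop in fuzzy simulation to evaluate the possibility-measure constraints and keeps a Pareto archive for the multiple objectives.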
10 CFR 436.14 - Methodological assumptions.
2010-01-01
... discount to present values the future cash flows established in either current or constant dollars... present value using the appropriate present worth factors under paragraph (a) of this section. (g) Each... the beginning of beneficial use with appropriate replacement and salvage values for each of the other...
Regional modeling of SOA formation under consideration of HOMs
Gatzsche, Kathrin; Iinuma, Yoshiteru; Tilgner, Andreas; Berndt, Torsten; Poulain, Laurent; Wolke, Ralf
2017-04-01
Secondary organic aerosol (SOA) is the major burden of atmospheric organic particulate matter, with about 140-910 TgC/yr (Hallquist et al., 2009). SOA particles are formed via the oxidation of volatile organic compounds (VOCs), whereby the volatility of the VOCs is lowered. Therefore, gaseous compounds can either nucleate to form new particles or condense on existing particles. The framework of SOA formation under natural conditions is very complex, because there are a variety of gas-phase precursors, atmospheric degradation pathways and resulting oxidation products. Up to now, atmospheric models have underpredicted the SOA mass. Therefore, improved regional-scale model implementations are necessary to achieve better agreement between model predictions and field measurements. Recently, highly oxidized multifunctional organic compounds (HOMs) were found in the gas phase in laboratory and field studies (Jokinen et al., 2015, Mutzel et al., 2015, Berndt et al., 2016a,b). From box model studies, it is known that HOMs are important for early aerosol growth; however, they are not yet considered in mechanisms applied in regional models. The present study utilizes the state-of-the-art multiscale model system COSMO-MUSCAT (Wolke et al., 2012), which is qualified for process studies in local and regional areas. The established model system was enhanced by a kinetic partitioning approach (Zaveri et al., 2014) for the gas-to-particle transfer of oxidized VOCs. The framework of the partitioning approach and the gas-phase mechanism were tested in a box model and evaluated with chamber studies before being implemented in the 3D model system COSMO-MUSCAT. Moreover, HOMs are implemented in the same way for the regional SOA modeling. 3D simulations were performed with an equilibrium partitioning and a diffusion-dependent partitioning approach, respectively. The presentation will provide first 3D simulation results including comparisons with field measurements from the TROPOS field site
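The difference between equilibrium and kinetic partitioning can be illustrated with a minimal first-order relaxation sketch in the spirit of Zaveri et al. (2014). All numbers (concentrations, effective saturation concentration, mass-transfer rate) are assumed, and evaporation is ignored; this is not the COSMO-MUSCAT implementation:

```python
# Kinetic gas-to-particle partitioning sketch: the gas-phase concentration
# of an oxidized VOC relaxes toward its effective saturation concentration
# c_star, transferring mass to the particle phase.
c_gas, c_part = 1.0, 0.0     # µg/m³, hypothetical oxidized-VOC burden
c_star = 0.25                # effective saturation concentration (assumed)
k = 0.05                     # mass-transfer rate, 1/s (assumed)
dt, t_end = 1.0, 600.0

t = 0.0
while t < t_end:
    flux = max(k * (c_gas - c_star), 0.0)   # condensation only, no evaporation
    c_gas -= flux * dt                      # explicit Euler step
    c_part += flux * dt
    t += dt

print(c_gas, c_part)   # mass is conserved; c_gas relaxes toward c_star
```

Equilibrium partitioning corresponds to the limit of infinitely fast transfer (c_gas jumps to c_star immediately); the kinetic approach resolves the finite relaxation time, which matters for the early growth driven by HOMs.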
HYPROLOG: A New Logic Programming Language with Assumptions and Abduction
DEFF Research Database (Denmark)
Christiansen, Henning; Dahl, Veronica
2005-01-01
… The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the ``pure'' sublanguages, while for the full HYPROLOG language it must be taken as a definition. The implementation is straightforward and seems to provide, for abduction, the most efficient of known implementations; the price, however, is a limited use of negations. The main difference w.r.t. previous implementations of abduction is that we avoid any level of meta-interpretation by having Prolog execute the deductive steps directly and by treating abducibles (and…
DDH-like Assumptions Based on Extension Rings
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike
2011-01-01
We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of F_{q} ``in the exponent'' of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring R_f = F_q[X]/(f). The resulting d-DDH problem, just like DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X) = X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen…
DDH-Like Assumptions Based on Extension Rings
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike
2012-01-01
We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\mathbb{F}_{q}$ “in the exponent” of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R_f = \mathbb{F}_q[X]/(f)$. … We can use d-DDH in place of DDH in known constructions, keeping the same security proof but getting better security; moreover, the amortized complexity (e.g., computation per encrypted bit) is the same as when using DDH. We also show that d-DDH, just like DDH, is easy in bilinear groups. We therefore suggest a different type of assumption, the d-vector DDH problems (d-VDDH), which are based on f(X) = X^d, but with a twist to avoid problems with reducible polynomials. We show in the generic group model that d-VDDH is hard in bilinear groups and that the problems become harder with increasing d. We show that hardness of d-VDDH implies CCA-secure encryption, efficient Naor…
Modeling dynamic behavior of superconducting maglev systems under external disturbances
Huang, Chen-Guang; Xue, Cun; Yong, Hua-Dong; Zhou, You-He
2017-08-01
For a maglev system, vertical and lateral displacements of the levitation body may simultaneously occur under external disturbances, which often results in changes in the levitation and guidance forces and even causes some serious malfunctions. To fully understand the effect of external disturbances on the levitation performance, in this work, we build a two-dimensional numerical model on the basis of Newton's second law of motion and a mathematical formulation derived from magnetoquasistatic Maxwell's equations together with a nonlinear constitutive relation between the electric field and the current density. By using this model, we present an analysis of dynamic behavior for two typical maglev systems consisting of an infinitely long superconductor and a guideway of different arrangements of infinitely long parallel permanent magnets. The results show that during the vertical movement, the levitation force is closely associated with the flux motion and the moving velocity of the superconductor. After being disturbed at the working position, the superconductor has a disturbance-induced initial velocity and then starts to periodically vibrate in both lateral and vertical directions. Meanwhile, the lateral and vertical vibration centers gradually drift along their vibration directions. The larger the initial velocity, the faster their vibration centers drift. However, the vertical drift of the vertical vibration center seems to be independent of the direction of the initial velocity. In addition, due to the lateral and vertical drifts, the equilibrium position of the superconductor in the maglev systems is not a space point but a continuous range.
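The coupled vibration after a disturbance can be sketched with a point-mass model. The full model couples Newton's second law with a magnetoquasistatic field solve; the sketch below replaces the levitation and guidance forces with assumed linear restoring forces, so it reproduces the lateral/vertical vibration but not the drift of the vibration centres, which requires the hysteretic superconductor model:

```python
import math

m = 1.0                   # kg, levitated mass (assumed)
k_x, k_z = 20.0, 50.0     # effective guidance/levitation stiffnesses, N/m (assumed)
g = 9.81
z_eq = -m * g / k_z       # static sag of the levitation equilibrium

x, z = 0.0, z_eq          # start at the working position
vx0, vz0 = 0.05, 0.05     # disturbance-induced initial velocity, m/s
vx, vz = vx0, vz0
dt, steps = 1e-3, 20000

max_zdev = 0.0
for _ in range(steps):
    # Semi-implicit (symplectic) Euler keeps the oscillation stable.
    vx += (-k_x / m) * x * dt
    vz += (-k_z / m) * (z - z_eq) * dt
    x += vx * dt
    z += vz * dt
    max_zdev = max(max_zdev, abs(z - z_eq))

amp_pred = vz0 / math.sqrt(k_z / m)   # linear-theory vibration amplitude
print(max_zdev, amp_pred)
```

In the paper's model the restoring forces come from the induced currents in the superconductor, are hysteretic, and depend on the flux-motion history, which is what produces the velocity-dependent drift of the vibration centres.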
FEM modelling of soil behaviour under compressive loads
Ungureanu, N.; Vlăduţ, V.; Biriş, S. Şt
2017-01-01
Artificial compaction is one of the most dangerous forms of degradation of agricultural soil. Recognized as a phenomenon with multiple negative effects in terms of environment and agricultural production, soil compaction is strongly influenced by the size of the external load, soil moisture, size and shape of the footprint area, soil type and number of passes. Knowledge of soil behaviour under compressive loads is important in order to prevent or minimize soil compaction. In this paper, various models of soil behaviour during the artificial compaction produced by the wheel of an agricultural trailer were developed by means of the Finite Element Method. Simulations were performed on two types of soil (cohesive and non-cohesive) with known characteristics. By applying two loads (4.5 kN and 21 kN) over footprints of different sizes, models of the stress distributions occurring in the two soil types were obtained. Simulation results showed that soil stresses increase with increasing wheel load and vary with soil type.
Integrated Bali Cattle Development Model Under Oil Palm Plantation
Directory of Open Access Journals (Sweden)
Rasali Hakim Matondang
2015-09-01
Bali cattle have several advantages, such as high fertility and carcass percentage, and easy adaptation to new environments. Bali cattle productivity has nevertheless not been optimal, due in part to limited feed resources and the decrease of grazing and agricultural land. The aim of this paper is to describe Bali cattle development integrated with oil palm plantations, which is expected to improve productivity and increase the Bali cattle population. This integration model is carried out by raising Bali cattle under oil palm plantations through a nucleus estate scheme or individual farmer estate businesses. Several Bali cattle raising systems have been applied in the integration of oil palm plantations and Bali cattle. The intensive system can increase daily weight gain to 0.8 kg/head and calf crop to 35% per year, and has the potential for industrial development of feed and organic fertilizer. The semi-intensive system can improve the production of palm fruit bunches (PFB) by more than 10%, increase the harvested-crop area to 15 ha/farmer and reduce the amount of inorganic fertilizer. The extensive system can produce a calf crop of ≥70%, improve PFB by ≥30%, increase the business scale to ≥13 cows/farmer and reduce weeding costs by ≥16%. Integrated Bali cattle development may provide positive added value for both the palm oil business and the cattle business.
Modelling fracture of aged graphite bricks under radiation and temperature
Directory of Open Access Journals (Sweden)
Atheer Hashim
2017-05-01
The graphite bricks of the UK carbon dioxide gas-cooled nuclear reactors are subjected to neutron irradiation and radiolytic oxidation during operation, which affect thermal and mechanical material properties and may lead to structural failure. In this paper, an empirical equation is obtained and used to represent the reduction in thermal conductivity as a result of temperature and neutron dose. A 2D finite element thermal analysis was carried out using Abaqus to obtain the temperature distribution across the graphite brick. Although thermal conductivity could be reduced by up to 75% under certain conditions of dose and temperature, the analysis has shown that this has no significant effect on the temperature distribution. It was found that the temperature distribution within the graphite brick is non-radial, different from the steady-state temperature distribution used in previous studies [1,2]. To investigate the significance of this non-radial temperature distribution on the failure of graphite bricks, a subsequent mechanical analysis was also carried out with the nodal temperature information obtained from the thermal analysis. To predict the formation of cracks within the brick and their subsequent propagation, a linear traction–separation cohesive model in conjunction with the extended finite element method (XFEM) is used. Compared to the analysis with a steady-state radial temperature distribution, the crack initiation time for the model with the non-radial temperature distribution is delayed by almost one year in service, and the maximum crack length is also shorter by around 20%.
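A minimal version of the dose-degraded conduction step can be sketched in one dimension. The empirical reduction law, the brick thickness, the conductivity value, and the boundary temperatures below are assumed for illustration, not the paper's fitted expression or Abaqus model:

```python
import numpy as np

def conductivity(k0, dose_frac):
    """Assumed reduction law: up to 75% conductivity loss at full dose."""
    return k0 * (1.0 - 0.75 * dose_frac)

n = 51
x = np.linspace(0.0, 0.1, n)          # 10 cm wall (assumed)
k = conductivity(130.0, 0.8)          # W/m/K, heavily irradiated graphite
T_in, T_out = 400.0, 350.0            # boundary temperatures, °C (assumed)

# Steady 1-D conduction with uniform k: finite differences give a
# tridiagonal system T[i-1] - 2 T[i] + T[i+1] = 0.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = T_in, T_out
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0
T = np.linalg.solve(A, b)

# Heat flux through the wall, Fourier's law at the hot face.
flux = -k * (T[1] - T[0]) / (x[1] - x[0])
print(T[n // 2], flux)
```

With uniform conductivity the profile is linear regardless of the value of k, which mirrors the paper's observation that even a large conductivity knock-down need not change the shape of the temperature distribution; what changes is the flux the brick can carry.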
Reichardt, J.; Reichardt, S.; Yang, P.; McGee, T. J.; Bhartia, P. K. (Technical Monitor)
2001-01-01
A retrieval algorithm has been developed for the microphysical analysis of polar stratospheric cloud (PSC) optical data obtained using lidar instrumentation. The parameterization scheme of the PSC microphysical properties allows for the coexistence of up to three different particle types with size-dependent shapes. The finite difference time domain (FDTD) method has been used to calculate optical properties of particles with maximum dimensions equal to or less than 2 μm and with shapes that can be considered more representative of PSCs on the scale of individual crystals than the commonly assumed spheroids; specifically, these are irregular and hexagonal crystals. Selection of the optical parameters that are input to the inversion algorithm is based on a potential data set such as that gathered by two of the lidars on board the NASA DC-8 during the Stratospheric Aerosol and Gas Experiment (SAGE) III Ozone Loss and Validation Experiment (SOLVE) campaign in winter 1999/2000: the Airborne Raman Ozone and Temperature Lidar (AROTEL) and the NASA Langley Differential Absorption Lidar (DIAL). The microphysical retrieval algorithm has been applied to study how particle shape assumptions affect the inversion of lidar data measured in lee-wave PSCs. The model simulations show that under the assumption of spheroidal particle shapes, PSC surface and volume density are systematically smaller than the FDTD-based values by approximately 10-30% and approximately 5-23%, respectively.
Projecting Wind Energy Potential Under Climate Change with Ensemble of Climate Model Simulations
Jain, A.; Shashikanth, K.; Ghosh, S.; Mukherjee, P. P.
2013-12-01
reanalysis/observed output. We apply the same approach for the future under RCP scenarios. We observe spatially and temporally varying changes in wind energy density across the globe. The underlying assumption is that the regression relationship will also hold for the future. The results highlight the need to revise the design standards of wind turbines at different locations in view of climate change, and at the same time the need for height modifications to existing turbines to produce the same energy in the future.
Model evaluation of denitrification under rapid infiltration basin systems.
Akhavan, Maryam; Imhoff, Paul T; Andres, A Scott; Finsterle, Stefan
2013-09-01
Rapid Infiltration Basin Systems (RIBS) are used for disposing reclaimed wastewater into soil to achieve additional treatment before it recharges groundwater. Effluent from most new sequenced batch reactor wastewater treatment plants is completely nitrified, and denitrification (DNF) is the main reaction for N removal. To characterize effects of complex surface and subsurface flow patterns caused by non-uniform flooding on DNF, a coupled overland flow-vadose zone model is implemented in the multiphase flow and reactive transport simulator TOUGHREACT. DNF is simulated in two representative soils varying the application cycle, hydraulic loading rate, wastewater quality, water table depth, and subsurface heterogeneity. Simulations using the conventional specified flux boundary condition under-predict DNF by as much as 450% in sand and 230% in loamy sand compared to predictions from the coupled overland flow-vadose zone model, indicating that simulating coupled flow is critical for predicting DNF in cases where hydraulic loading rates are not sufficient to spread the wastewater over the whole basin. Smaller ratios of wetting to drying time and larger hydraulic loading rates result in greater water saturations, more anoxic conditions, and faster water transport in the vadose zone, leading to greater DNF. These results in combination with those from different water table depths explain why reported DNF varied with soil type and water table depth in previous field investigations. Across all simulations, cumulative percent DNF varies between 2 and 49%, indicating that NO₃ removal in RIBS may vary widely depending on operational procedures and subsurface conditions. These modeling results improve understanding of DNF in RIBS and suggest operational procedures that may improve NO₃ removal. Copyright © 2013 Elsevier B.V. All rights reserved.
Preliminary Modeling of Accident Tolerant Fuel Concepts under Accident Conditions
Energy Technology Data Exchange (ETDEWEB)
Gamble, Kyle A.; Hales, Jason D.
2016-12-01
The catastrophic events that occurred at the Fukushima-Daiichi nuclear power plant in 2011 have led to widespread interest in research of alternative fuels and claddings that are proposed to be accident tolerant. Thus, the United States Department of Energy through its NEAMS (Nuclear Energy Advanced Modeling and Simulation) program has funded an Accident Tolerant Fuel (ATF) High Impact Problem (HIP). The ATF HIP is funded for a three-year period. The purpose of the HIP is to perform research into two potential accident tolerant concepts and provide an in-depth report to the Advanced Fuels Campaign (AFC) describing the behavior of the concepts, both of which are being considered for inclusion in a lead test assembly scheduled for placement into a commercial reactor in 2022. The initial focus of the HIP is on uranium silicide fuel and iron-chromium-aluminum (FeCrAl) alloy cladding. Utilizing the expertise of three national laboratory participants (INL, LANL, and ANL), a comprehensive multiscale approach to modeling is being used, including atomistic modeling, molecular dynamics, rate theory, phase-field modeling, and fuel performance simulations. In this paper, we present simulations of two proposed accident tolerant fuel systems: U3Si2 fuel with Zircaloy-4 cladding, and UO2 fuel with FeCrAl cladding. The simulations investigate the fuel performance response of the proposed ATF systems under Loss of Coolant and Station Blackout conditions using the BISON code. Sensitivity analyses are completed using Sandia National Laboratories' DAKOTA software to determine which input parameters (e.g., fuel specific heat) have the greatest influence on the output metrics of interest (e.g., fuel centerline temperature). Early results indicate that each concept has significant advantages as well as areas of concern. Further work is required prior to formulating the proposition report for the Advanced Fuels Campaign.
Directory of Open Access Journals (Sweden)
Rink eHoekstra
2012-05-01
Full Text Available A valid interpretation of most statistical techniques requires that the criteria for one or more assumptions are met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: only manuscripts with data fulfilling the assumptions are submitted. Another, more disquieting, explanation would be that violations of assumptions are hardly checked for in the first place. In this article a study is presented on whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. They were asked to analyze the data as they would their own data, in analyses that called for often-used and well-known techniques such as the t-procedure, ANOVA and regression. It was found that they hardly ever checked for violations of assumptions. Interviews afterwards revealed that mainly lack of knowledge and nonchalance, rather than more rational reasons such as awareness of the robustness of a technique or unfamiliarity with an alternative, seem to account for this behavior. These data suggest that merely encouraging people to check for violations of assumptions will not lead them to do so, and that the use of statistics is opportunistic.
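The kind of assumption screening the surveyed researchers rarely performed can be sketched as a small routine. This is a hypothetical illustration, not code from the article: the thresholds and the variance-ratio rule of thumb are assumptions chosen for the sketch.

```python
import statistics

def screen_t_test_assumptions(a, b, max_var_ratio=4.0):
    """Crude pre-checks before a two-sample t-test: sample size (normality
    matters for small n) and a rule-of-thumb equal-variance check.
    Returns a list of warning strings; empty means no red flags."""
    warnings = []
    if min(len(a), len(b)) < 30:
        warnings.append("small samples: normality matters; inspect the data")
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    ratio = max(var_a, var_b) / min(var_a, var_b)
    if ratio > max_var_ratio:
        warnings.append("variance ratio %.1f: consider Welch's t-test" % ratio)
    return warnings

w = screen_t_test_assumptions([1.0, 2.0, 3.0, 4.0], [10.0, 30.0, 50.0, 70.0])
```

Here both checks fire: the groups are tiny and their variances differ by far more than the rule-of-thumb factor of four.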
DEFF Research Database (Denmark)
Øjelund, Henrik; Sadegh, Payman
2000-01-01
Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation... be obtained. This paper presents a new approach for system modelling under partial (global) information (or so-called gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under prior linear time invariant structure, where local regression fails as a result of high dimensionality.
Public key cryptography from weaker assumptions
DEFF Research Database (Denmark)
Zottarel, Angela
This dissertation is focused on the construction of public key cryptographic primitives and on the relative security analysis in a meaningful theoretic model. This work takes two orthogonal directions. In the first part, we study cryptographic constructions preserving their security properties also in the case the adversary is granted access to partial information about the secret state of the primitive. To do so, we work in an extension of the standard black-box model, a new framework where possible leakage from the secret state is taken into account. In particular, we give the first construction of signature schemes in a very general leakage model known as auxiliary input. We also study how leakage influences the notion of simulation-based security, comparing leakage tolerance to adaptive security in the UC-framework. In the second part of this dissertation, we turn our attention to hardness...
Breakdown of Hydrostatic Assumption in Tidal Channel with Scour Holes
Directory of Open Access Journals (Sweden)
Chunyan Li
2016-10-01
Full Text Available Hydrostatic condition is a common assumption in tidal and subtidal motions in oceans and estuaries. Theories with this assumption have been largely successful. However, there are no definite criteria separating the hydrostatic from the non-hydrostatic regimes in real applications, because real problems often have multiple scales. With increased refinement of high resolution numerical models encompassing smaller and smaller spatial scales, the need for non-hydrostatic models is increasing. To evaluate the vertical motion over bathymetric changes in tidal channels and assess the validity of the hydrostatic approximation, we conducted observations using a vessel-based acoustic Doppler current profiler (ADCP). Observations were made along a straight channel 18 times over two scour holes of 25 m depth, separated by 330 m, in and out of an otherwise flat 8 m deep tidal pass leading to Lake Pontchartrain, over a time period of 8 hours covering part of the diurnal tidal cycle. Of the 18 passages over the scour holes, 11 showed strong upwelling and downwelling, which resulted in the breakdown of the hydrostatic condition. The maximum observed vertical velocity was ~0.35 m/s, a high value in a tidal channel, and the estimated vertical acceleration reached a high value of 1.76×10^-2 m/s^2. Analysis demonstrated that the barotropic non-hydrostatic acceleration was dominant. The cause of the non-hydrostatic flow was the flow over the steep slopes of the scour holes. This demonstrates that in such a system, bathymetric variation can lead to the breakdown of hydrostatic conditions. Models with hydrostatic restrictions will not be able to correctly capture the dynamics in such a system with significant bathymetric variations, particularly during strong tidal currents.
2011-12-29
... Indian country is subject to State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to... Collection; Comments Requested; Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country ACTION: 60-Day notice of information collection under review. The Department of Justice...
Cooper, Elisa; Greve, Andrea; Henson, Richard N
2017-06-01
Source monitoring paradigms have been used to separate: 1) the probability of recognising an item (Item memory) and 2) the probability of remembering the context in which that item was previously encountered (Source memory), conditional on it being recognised. Multinomial Processing Tree (MPT) models are an effective way to estimate these conditional probabilities. Moreover, MPTs make explicit the assumptions behind different ways to parameterise Item and Source memory. Using data from six independent groups across two different paradigms, we show that one would draw different conclusions about the effects of age, age-related memory problems and hippocampal lesions on Item and Source memory, depending on the use of: 1) standard accuracy calculation vs MPT analysis, and 2) two different MPT models. The MPT results were more consistent than standard accuracy calculations, and furnished additional parameters that can be interpreted in terms of, for example, false recollection or missed encoding. Moreover, a new MPT structure that allowed for separate memory representations (one for item information and one for item-plus-source information; the Source-Item model) fit the data better, and provided a different pattern of significant differences in parameters, than the more conventional MPT structure in which source information is a subset of item information (the Item-Source model). Nonetheless, there is no theory-neutral way of scoring data, and thus proper examination of the assumptions underlying the scoring of source monitoring paradigms is necessary before theoretical conclusions can be drawn. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
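The contrast between a standard conditional-accuracy calculation and an MPT-style decomposition can be sketched as follows. This is a toy single-tree illustration under assumed parameter names (D, d, g); it is not the Source-Item or Item-Source model from the paper.

```python
# Toy illustration (assumed structure, not the authors' actual MPT models):
# compare standard conditional scoring with a simple MPT decomposition
# for a source-monitoring task.

def standard_scores(hits, misses, correct_source, wrong_source):
    """Item memory = hit rate; Source memory = P(correct source | recognised)."""
    item = hits / (hits + misses)
    source = correct_source / (correct_source + wrong_source)
    return item, source

def mpt_correct_source_rate(D, d, g=0.5):
    """Conventional tree: recognise with probability D; given recognition,
    recollect the source with probability d, otherwise guess it correctly
    with probability g. Predicted P(correct source response to an old item)."""
    return D * (d + (1 - d) * g)

item, source = standard_scores(hits=80, misses=20,
                               correct_source=60, wrong_source=20)
```

Note how the MPT makes the guessing parameter g explicit, whereas the standard conditional score folds recollection and guessing together; this is the kind of assumption the abstract argues must be examined.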
Measuring sound absorption using local field assumptions
Kuipers, E.R.
2013-01-01
To more effectively apply acoustically absorbing materials, it is desirable to measure angle-dependent sound absorption coefficients, preferably in situ. Existing measurement methods are based on an overall model of the acoustic field in front of the absorber, and are therefore sensitive to
Characteristics and modeling of spruce wood under dynamic compression load
International Nuclear Information System (INIS)
Eisenacher, Germar
2014-01-01
criterion uses linear interpolation of the strength of constrained and unconstrained spruce wood. Thus, multiaxial stress states can be considered. The calculation of the crush tests showed the ability of the model to reproduce the basic strength characteristics of spruce wood. The effect of lateral constraint can be reproduced well due to the uncoupled evolution of the yield surface. In contrast, the strength is overestimated for load under acute angles, which could be prevented using modified yield surfaces. The effects of strain rate and temperature are generally reproduced well, but the scaling factors used should be improved. The calculation of a drop test with a test package equipped with wood-filled impact limiters confirmed the model's performance and produced feasible results. However, to create a verified impact limiter model, further numerical and experimental investigations are necessary. This work makes an important contribution to numerical stress analysis in the context of safety cases for transport packages.
Modeling a Hybrid Microgrid Using Probabilistic Reconfiguration under System Uncertainties
Directory of Open Access Journals (Sweden)
Hadis Moradi
2017-09-01
Full Text Available A novel method for a day-ahead optimal operation of a hybrid microgrid system including fuel cells, photovoltaic (PV) arrays, a microturbine, and battery energy storage in order to fulfill the required load demand is presented in this paper. In the proposed system, the microgrid has access to the main utility grid in order to exchange power when required. Available municipal waste is utilized to produce the hydrogen required for running the fuel cells, and natural gas is used as the backup source. In the proposed method, an energy scheduling is introduced to optimize the generating unit power outputs for the next day, as well as the power flow with the main grid, in order to minimize the operational costs and the produced greenhouse gas emissions. Renewable energy generation and electric power consumption are both intermittent and unpredictable, and the uncertainty related to the PV array power generation and power consumption has been considered in the next-day energy scheduling. In order to model uncertainties, scenarios are produced using Monte Carlo (MC) simulations, and microgrid optimal energy scheduling is analyzed under the generated scenarios. In addition, the various scenarios created by MC simulations are applied in order to solve unit commitment (UC) problems. The microgrid's day-ahead operation and emission costs are considered as the objective functions, and the particle swarm optimization algorithm is employed to solve the optimization problem. Overall, the proposed model is capable of minimizing the system costs, as well as the unfavorable influence of uncertainties on the microgrid's profit, by generating different scenarios.
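The Monte Carlo scenario-generation step can be sketched as below. The Gaussian error model, error magnitudes, and 24-hour profiles are invented for illustration; the paper's actual distributions and parameters are not reproduced here.

```python
import random

def generate_scenarios(pv_forecast, load_forecast, n_scenarios=100,
                       pv_sigma=0.15, load_sigma=0.05, seed=1):
    """Generate (pv, load) scenario pairs for a day-ahead schedule by
    perturbing hourly forecasts with multiplicative Gaussian noise
    (an assumed error model). PV output is clipped at zero."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        pv = [max(0.0, p * (1 + rng.gauss(0, pv_sigma))) for p in pv_forecast]
        load = [l * (1 + rng.gauss(0, load_sigma)) for l in load_forecast]
        scenarios.append((pv, load))
    return scenarios

# Illustrative 24-hour forecasts (kW); a scenario-based unit-commitment
# problem would then be solved over this set of scenarios.
pv_fc = [0.0] * 6 + [10, 30, 60, 80, 90, 95, 95, 90, 75, 50, 25, 5] + [0.0] * 6
load_fc = [40.0] * 24
scens = generate_scenarios(pv_fc, load_fc)
```

Each scenario is one plausible realisation of next-day PV and load; the scheduler optimises against the whole set rather than a single point forecast.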
Leakage-Resilient Circuits without Computational Assumptions
DEFF Research Database (Denmark)
Dziembowski, Stefan; Faust, Sebastian
2012-01-01
Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied: (i) in each observation the leakage is bounded, (ii) different parts of the computation leak independently, and (iii) the randomness that is used for certain operations comes from a simple (non-uniform) distribution. In contrast to earlier work on leakage resilient circuit compilers, which relied...
Directory of Open Access Journals (Sweden)
Luiz Fernando Carvalho Leite
2003-08-01
Full Text Available The modeling of biological processes has as its objectives the planning of land use, the setting of environmental standards, and the estimation of the actual and potential risks of agricultural and environmental activities. Several models have been created in the last 25 years. Century is a mechanistic model that analyzes the long-term dynamics of soil organic matter and of nutrients in the soil-plant system in several agroecosystems. The soil organic matter submodel has active (microbial biomass and products), slow (plant and microbial products that are physically protected or biologically resistant to decomposition), and passive (chemically recalcitrant or also physically protected) compartments with different decomposition rates. First-order equations are used to model all soil organic matter compartments, and soil temperature and moisture modify the decomposition rates. Recycling of the active compartment and formation of the passive compartment are controlled by the sand and clay contents of the soil, respectively. Plant residues are divided into compartments depending on their lignin and nitrogen contents. Through the model, organic matter can be related to fertility levels and to current and future management, improving the understanding of nutrient transformations in soils of several agroecosystems.
Experimental assessment of unvalidated assumptions in classical plasticity theory.
Energy Technology Data Exchange (ETDEWEB)
Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.
2009-01-01
This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.
Assumptions of Customer Knowledge Enablement in the Open Innovation Process
Directory of Open Access Journals (Sweden)
Jokubauskienė Raminta
2017-08-01
Full Text Available In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but customers nevertheless stand as the cornerstone in this area, since customers' knowledge is one of the most important sources of new knowledge and ideas. In evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customer knowledge management facilitates the creation of innovations. However, other factors that influence open innovation, and, at the same time, customer knowledge management, should also be examined. This article presents a theoretical model which reveals the assumptions of the open innovation process and their impact on the firm's performance.
Robust nonlinear control of nuclear reactors under model uncertainty
International Nuclear Information System (INIS)
Park, Moon Ghu
1993-02-01
uncertainty. The performance specification in the boundary layer is also proposed. In the boundary layer, a direct adaptive controller is developed which consists of adaptive proportional-integral-feedforward (PIF) gains. The essence of the controller is to divide the control into four different terms; the adaptive P-I-F gains and a time-optimal controller are used to accomplish the specific control actions of each term. The robustness of the controller is guaranteed by the feedback of the estimated uncertainty, and the performance specification is given by the adaptation of PIF gains using the second method of Lyapunov. The newly developed control method is applied to the power tracking control of a nuclear reactor, and the simulation results show great improvement in tracking performance compared with conventional control methods. In addition, a constraint-accommodating adaptive control method is developed. The method is based on a dead-best identified plant model and a simple, but mathematically constructive, adaptation rule for the model-based PI feedback gains. The method is particularly devoted to considerations on the output constraint. The effectiveness of the controller is shown by application of the method to the power tracking control of the Korea Multipurpose Research Reactor (KMRR). The simulation results show robustness against modeling uncertainty and excellent performance under unknown deteriorating actuator conditions. It is concluded that the nonlinear control methods developed in this thesis, based on the use of a simple uncertainty estimator and adaptation algorithms for feedback and feedforward gains, provide not only robustness against modeling uncertainty but also very fast and smooth performance behavior.
International Nuclear Information System (INIS)
Pulcini, Gianpaolo
2015-01-01
This note investigates the effect of incorrect modeling of the failure process of minimally repaired systems operating under random environmental conditions on the costs of periodic replacement maintenance. The motivation for this paper is given by a recently published paper, where a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions for the expected cost and the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences caused by the incorrect modeling of the failure process can be.
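The baseline optimisation the note builds on can be sketched for the classical (non-random-environment) case. This assumes a power-law intensity with cumulative failures Lambda(T) = (T/alpha)^beta and invented cost figures; the note's corrected expressions for random environments are not reproduced here.

```python
def expected_cost_rate(T, alpha, beta, c_replace, c_repair):
    """C(T) = (c_replace + c_repair * Lambda(T)) / T under minimal repair,
    with Lambda(T) = (T/alpha)^beta expected failures in [0, T]."""
    return (c_replace + c_repair * (T / alpha) ** beta) / T

def optimal_period(alpha, beta, c_replace, c_repair):
    """Closed form from dC/dT = 0:
    T* = alpha * (c_replace / (c_repair * (beta - 1)))^(1/beta),
    valid for beta > 1 (wear-out regime)."""
    return alpha * (c_replace / (c_repair * (beta - 1))) ** (1.0 / beta)

# Illustrative figures: alpha = 1000 h, beta = 2 (linear wear-out),
# replacement cost 500, minimal-repair cost 100.
T_star = optimal_period(alpha=1000.0, beta=2.0, c_replace=500.0, c_repair=100.0)
```

The note's point is that under random environments the intensity function need not share the functional form of the first-failure hazard, so plugging the hazard into this formula yields the wrong T*.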
Are Gaussian spectra a viable perceptual assumption in color appearance?
Mizokami, Yoko; Webster, Michael A
2012-02-01
Natural illuminant and reflectance spectra can be roughly approximated by a linear model with as few as three basis functions, and this has suggested that the visual system might construct a linear representation of the spectra by estimating the weights of these functions. However, such models do not accommodate nonlinearities in color appearance, such as the Abney effect. Previously, we found that these nonlinearities are qualitatively consistent with a perceptual inference that stimulus spectra are instead roughly Gaussian, with the hue tied to the inferred centroid of the spectrum [J. Vision 6(9), 12 (2006)]. Here, we examined to what extent a Gaussian inference provides a sufficient approximation of natural color signals. Reflectance and illuminant spectra from a wide set of databases were analyzed to test how well the curves could be fit by either a simple Gaussian with three parameters (amplitude, peak wavelength, and standard deviation) versus the first three principal component analysis components of standard linear models. The resulting Gaussian fits were comparable to linear models with the same degrees of freedom, suggesting that the Gaussian model could provide a plausible perceptual assumption about stimulus spectra for a trichromatic visual system. © 2012 Optical Society of America
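The comparison in the abstract, fitting a three-parameter Gaussian (amplitude, peak wavelength, standard deviation) to a spectrum, can be sketched as follows. The spectrum and the crude grid search are invented stand-ins; the study used real reflectance and illuminant databases and a proper optimiser.

```python
import math

def gaussian(wl, amp, peak, sd):
    """Three-parameter Gaussian model of a spectrum at wavelength wl (nm)."""
    return amp * math.exp(-0.5 * ((wl - peak) / sd) ** 2)

def sse(spectrum, amp, peak, sd):
    """Sum of squared errors of the Gaussian fit over (wavelength, value) pairs."""
    return sum((r - gaussian(wl, amp, peak, sd)) ** 2 for wl, r in spectrum)

# Toy "reflectance" sampled every 10 nm over 400-700 nm, generated from
# a known Gaussian so the fit can be checked.
spectrum = [(wl, gaussian(wl, 0.8, 550.0, 60.0)) for wl in range(400, 701, 10)]

# Crude grid search over the three parameters stands in for a real optimiser.
best = min(
    ((a / 10, p, s)
     for a in range(1, 11)
     for p in range(500, 601, 10)
     for s in range(40, 81, 10)),
    key=lambda t: sse(spectrum, *t),
)
```

With equal degrees of freedom (three parameters each), the Gaussian fit can then be compared against a three-component linear-model reconstruction by the same SSE criterion.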
Modeling of the response under radiation of electronic dosemeters
International Nuclear Information System (INIS)
Menard, S.
2003-01-01
Simulating the interactions and transport of primary and secondary radiations in detectors with calculation codes allows the number of developed prototypes and the number of experiments under radiation to be reduced. Simulation makes possible the determination of the response of the instrument for exposure configurations more extensive than those of the reference radiations produced in laboratories. MCNPX can transport, in addition to photons, electrons and neutrons, charged particles heavier than electrons, and can simulate radiation-matter interactions for a certain number of particles. The present paper aims to present the value of using the MCNPX code in the study, research and evaluation phases of the instrumentation necessary for dosimetry monitoring. To do so, the presentation gives the results of the modeling of a prototype of a tissue equivalent proportional counter (C.P.E.T.) and of the C.R.A.M.A.L. (a radiation protection apparatus marketed by the Eurisys Mesures company). (N.C.)
Modeling the Underlying Predicting Factors of Tobacco Smoking among Adolescents.
Jafarabadi, M Asghari; Allahverdipour, H; Bashirian, S; Jannati, A
2012-01-01
Young people in Iran show willingness to start tobacco smoking. The aim of the study was to model the underlying factors predicting tobacco smoking behavior among employed youth and students in Iran. In this analytical cross-sectional study, 850 high school students and employed and unemployed youth aged between 14 and 19 years from Iran were recruited based on random cluster sampling. The data on demographic and tobacco smoking related variables were acquired via a self-administered questionnaire. A series of univariate and multivariate logistic regressions were performed, respectively, for computing unadjusted and adjusted odds ratios utilizing SPSS 17 software. A total of 189 persons (25.6%) were smokers in the study, and the mean smoking initiation age was 13.93 (SD = 2.21). In addition, having a smoker friend, peer persistence, leaving home, and smoking in the past one and six months were obtained as independent predictors of tobacco smoking. Education programs on resistance skills against peer persistence, and improvements in health programs through governmental intervention and policy, should be implemented.
A unifying model of genome evolution under parsimony.
Paten, Benedict; Zerbino, Daniel R; Hickey, Glenn; Haussler, David
2014-06-19
Parsimony and maximum likelihood methods of phylogenetic tree estimation and parsimony methods for genome rearrangements are central to the study of genome evolution yet to date they have largely been pursued in isolation. We present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. It conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join (DCJ) rearrangements in the presence of duplications. The problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. We show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and DCJ rearrangements needed to explain any history graph. These bounds become tight for a special type of unambiguous history graph called an ancestral variation graph (AVG), which constrains in its combinatorial structure the number of operations required. We finally demonstrate that for a given history graph G, a finite set of AVGs describe all parsimonious interpretations of G, and this set can be explored with a few sampling moves. This theoretical study describes a model in which the inference of genome rearrangements and phylogeny can be unified under parsimony.
Directory of Open Access Journals (Sweden)
Hakan Bilir
2016-03-01
Full Text Available The evaluation of investment opportunities depends on the measurement of expected return and risk. The Capital Asset Pricing Model (CAPM) has for many years been one of the cornerstones of modern finance theory. The model establishes a simple linear relationship between the expected return of assets and their systematic risk. It is still used to compute the cost of capital, to measure portfolio management performance, and to evaluate investments. The appeal of the CAPM comes from its strong predictive ability regarding risk and the relationship between expected return and risk. Nevertheless, this ability of the model has been questioned by academics and practitioners for more than 30 years. The debates have largely been conducted at the empirical level. The empirical problems of the CAPM are theoretical failings, arising from its many simplifying assumptions. Its numerous unrealistic assumptions render the model practically useless. The main criticisms of the model focus on the risk-free interest rate, the market portfolio, and the beta coefficient.
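The linear relationship the abstract describes is the standard security market line; the figures below are illustrative, not from the article.

```python
def capm_expected_return(risk_free, beta, market_return):
    """Security market line: E[R_i] = r_f + beta_i * (E[R_m] - r_f).
    beta_i is the asset's systematic risk relative to the market portfolio."""
    return risk_free + beta * (market_return - risk_free)

# A stock with beta 1.2, a 3% risk-free rate, and an 8% expected market
# return has an expected return of 3% + 1.2 * 5% = 9%.
er = capm_expected_return(risk_free=0.03, beta=1.2, market_return=0.08)
```

Each input here is exactly one of the criticised quantities: the risk-free rate, the (unobservable) market portfolio return, and the estimated beta coefficient.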
Salerno, Laura; Rhind, Charlotte; Hibbs, Rebecca; Micali, Nadia; Schmidt, Ulrike; Gowers, Simon; Macdonald, Pamela; Goddard, Elizabeth; Todd, Gillian; Lo Coco, Gianluca; Treasure, Janet
2016-02-01
The cognitive interpersonal model predicts that parental caregiving style will impact on the rate of improvement of anorexia nervosa symptoms. The study aims to examine whether the absolute levels and the relative congruence between mothers' and fathers' caregiving styles influenced the rate of change of their children's symptoms of anorexia nervosa over 12 months. Triads (n=54) consisting of patients with anorexia nervosa and both of their parents were included in the study. Caregivers completed the Caregiver Skills scale and the Accommodation and Enabling Scale at intake. Patients completed the Short Evaluation of Eating Disorders at intake and at monthly intervals for one year. Polynomial Hierarchical Linear Modeling was used for the analysis. There is a person/dose-dependent relationship between accommodation and patients' outcome: when both mother and father are highly accommodating, outcome is poor; if either is highly accommodating, outcome is intermediate; and if both parents are low on accommodation, outcome is good. Outcome is also good if both parents, or the mother alone, have high levels of carer skills, and poor if both have low levels of skills. The inclusion of only a sub-sample of an adolescent clinical population, the failure to consider time spent caregiving, and the reliance on patients' self-reported outcome data limit the generalisability of the current findings. Accommodating and enabling behaviours by family members can serve to maintain eating disorder behaviours. However, skilful behaviours, particularly by mothers, can aid recovery. Clinical interventions to optimise caregiving skills and to reduce accommodation by both parents may be an important addition to treatment for anorexia nervosa. Copyright © 2015 Elsevier B.V. All rights reserved.
Mass casualty incidents: a review of triage severity planning assumptions.
Hunt, Paul
2017-12-01
Recent events involving a significant number of casualties have emphasised the importance of appropriate preparation for receiving hospitals, especially Emergency Departments, during the initial response phase of a major incident. Development of a mass casualty resilience and response framework in the Northern Trauma Network included a review of existing planning assumptions in order to ensure effective resource allocation, both in local receiving hospitals and system-wide. Existing planning assumptions regarding categorisation by triage level are generally stated as a ratio for P1:P2:P3 of 25%:25%:50% of the total number of injured survivors. This may significantly over- or underestimate the number in each level of severity in the case of a large-scale incident. A pilot literature review was conducted of the available evidence from historical incidents in order to gather data regarding the confirmed number of overall casualties, 'critical' cases, admitted cases, and non-urgent or discharged cases. These data were collated and grouped by mechanism in order to calculate an appropriate severity ratio for each incident type. 12 articles regarding mass casualty incidents from the last two decades were identified, covering three main incident types: (1) mass transportation crash, (2) building fire, and (3) bomb and related terrorist attacks, involving a total of 3615 injured casualties. The overall mortality rate was calculated as 12.3%. Table 1 summarises the available patient casualty data from each of the specific incidents reported and the calculated proportions of critical ('P1'), admitted ('P2'), and non-urgent or ambulatory ('P3') cases. Despite the heterogeneity of data and range of incident types, there is sufficient evidence to suggest that current planning assumptions are incorrect and a more refined model is required. An important finding is the variation in the proportion of critical cases depending upon the mechanism. For example, a greater than expected proportion
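The severity-ratio calculation described above can be sketched as follows. The per-mechanism casualty counts are invented for illustration; they are not the figures from the 12 reviewed incidents.

```python
def severity_ratios(critical, admitted, non_urgent):
    """Return the P1:P2:P3 split as percentages of injured survivors."""
    total = critical + admitted + non_urgent
    return tuple(round(100 * n / total, 1) for n in (critical, admitted, non_urgent))

# Hypothetical per-mechanism counts (P1 critical, P2 admitted, P3 non-urgent).
incidents = {
    "transport_crash": (40, 110, 250),
    "building_fire": (30, 60, 110),
    "bombing": (90, 210, 400),
}
ratios = {k: severity_ratios(*v) for k, v in incidents.items()}
# Each mechanism's ratio can then be compared against the 25:25:50
# planning assumption to quantify over- or underestimation.
```

Even with these toy numbers, the P1 proportion varies by mechanism, which is the review's central point against a single fixed planning ratio.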
Legal assumptions for a private company claim for additional (supplementary) payment
Directory of Open Access Journals (Sweden)
Šogorov Stevan
2011-01-01
Full Text Available The subject matter of analysis in this article is the legal assumptions which must be met in order to enable a private company to call for additional payment. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payment in the formation contract, or in a general resolution of the shareholders meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders meeting which creates individual obligations for additional payments. The third assumption is defined as distinctness regarding the sum of the payment and the due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for realization of the company's right to claim additional payments from a member of the private company.
An Integrated Coral Reef Ecosystem Model to Support Resource Management under a Changing Climate.
Weijerman, Mariska; Fulton, Elizabeth A; Kaplan, Isaac C; Gorton, Rebecca; Leemans, Rik; Mooij, Wolf M; Brainard, Russell E
2015-01-01
Millions of people rely on the ecosystem services provided by coral reefs, but sustaining these benefits requires an understanding of how reefs and their biotic communities are affected by local human-induced disturbances and global climate change. Ecosystem-based management that explicitly considers the indirect and cumulative effects of multiple disturbances has been recommended and adopted in policies in many places around the globe. Ecosystem models give insight into complex reef dynamics and their responses to multiple disturbances and are useful tools to support planning and implementation of ecosystem-based management. We adapted the Atlantis Ecosystem Model to incorporate key dynamics for a coral reef ecosystem around Guam in the tropical western Pacific. We used this model to quantify the effects of predicted climate and ocean changes and current levels of land-based sources of pollution (LBSP) and fishing. We used the following six ecosystem metrics as indicators of ecosystem state, resilience and harvest potential: 1) ratio of calcifying to non-calcifying benthic groups, 2) trophic level of the community, 3) biomass of apex predators, 4) biomass of herbivorous fishes, 5) total biomass of living groups and 6) the end-to-start ratio of exploited fish groups. Simulation tests of the effects of each of the three drivers separately suggest that by mid-century climate change will have the largest overall effect on this suite of ecosystem metrics due to substantial negative effects on coral cover. The effects of fishing were also important, negatively influencing five out of the six metrics. Moreover, LBSP exacerbates this effect for all metrics but not quite as badly as would be expected under additive assumptions, although the magnitude of the effects of LBSP are sensitive to uncertainty associated with primary productivity. Over longer time spans (i.e., 65 year simulations), climate change impacts have a slight positive interaction with other drivers.
An Integrated Coral Reef Ecosystem Model to Support Resource Management under a Changing Climate.
Directory of Open Access Journals (Sweden)
Mariska Weijerman
Full Text Available Millions of people rely on the ecosystem services provided by coral reefs, but sustaining these benefits requires an understanding of how reefs and their biotic communities are affected by local human-induced disturbances and global climate change. Ecosystem-based management that explicitly considers the indirect and cumulative effects of multiple disturbances has been recommended and adopted in policies in many places around the globe. Ecosystem models give insight into complex reef dynamics and their responses to multiple disturbances and are useful tools to support planning and implementation of ecosystem-based management. We adapted the Atlantis Ecosystem Model to incorporate key dynamics for a coral reef ecosystem around Guam in the tropical western Pacific. We used this model to quantify the effects of predicted climate and ocean changes and current levels of land-based sources of pollution (LBSP) and fishing. We used the following six ecosystem metrics as indicators of ecosystem state, resilience and harvest potential: 1) ratio of calcifying to non-calcifying benthic groups, 2) trophic level of the community, 3) biomass of apex predators, 4) biomass of herbivorous fishes, 5) total biomass of living groups and 6) the end-to-start ratio of exploited fish groups. Simulation tests of the effects of each of the three drivers separately suggest that by mid-century climate change will have the largest overall effect on this suite of ecosystem metrics due to substantial negative effects on coral cover. The effects of fishing were also important, negatively influencing five out of the six metrics. Moreover, LBSP exacerbates this effect for all metrics, but not quite as badly as would be expected under additive assumptions, although the magnitude of the effects of LBSP is sensitive to uncertainty associated with primary productivity. Over longer time spans (i.e., 65 year simulations), climate change impacts have a slight positive interaction with
van Koppen, M.V.; Elffers, H.; Ruiter, S.
2011-01-01
Likelihood surface methods for geographic offender profiling rely on several assumptions regarding the underlying location choice mechanism of an offender. We propose an ex ante test for checking whether a given set of crime locations is compatible with two necessary assumptions: circular symmetry
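The abstract does not spell out the ex ante test, but a check of the circular-symmetry assumption for a set of crime locations around an anchor point can be screened with a Rayleigh-type statistic on the bearings. This is an illustrative sketch only; the function name, the anchor point, and the use of the Rayleigh statistic are assumptions, not the authors' actual procedure:

```python
import math

def rayleigh_test(points, anchor):
    """Crude screen of circular symmetry: mean resultant length R of the
    bearings from an assumed anchor point to each crime location.
    R near 0 is consistent with circular symmetry; R near 1 means the
    locations are strongly directional and the assumption is suspect."""
    angles = [math.atan2(y - anchor[1], x - anchor[0]) for x, y in points]
    n = len(angles)
    c = sum(math.cos(a) for a in angles) / n
    s = sum(math.sin(a) for a in angles) / n
    r = math.hypot(c, s)            # mean resultant length in [0, 1]
    z = n * r * r                   # Rayleigh statistic
    p_approx = math.exp(-z)         # large-sample p-value approximation
    return r, p_approx

# Four locations placed symmetrically around the anchor: R is ~0.
r, p = rayleigh_test([(1, 0), (0, 1), (-1, 0), (0, -1)], (0, 0))
```

A small p-value here would flag a crime-location set as incompatible with circular symmetry before any likelihood-surface profiling is attempted.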
Development of a Soybean Sowing Model Under Laboratory Conditions
Directory of Open Access Journals (Sweden)
Jan Turan
2017-01-01
Full Text Available Sowing is affected by numerous factors, and high‑quality sowing is therefore an important task for agricultural engineers and managers of profitable agricultural production. The primary purpose of sowing is to place seeds at proper depths and in‑row spacings in well‑prepared soil. Plant population gives particular prominence to sowing, as it directly affects the uniformity of plant growth and development. Soybean yield formation is especially dependent on sowing quality because of the close in‑row spacing of seeds. Provided all external factors of high‑quality sowing (i.e. sowing conditions) are met, the quality of sowing depends upon the planting mechanism. The most important features of the planting mechanism are the RPM of the seed disc, the travel speed of the seeder, and the gauge and vacuum pressure values. This paper presents the results of sowing three different fractions of soybean seeds under laboratory conditions. The quality of sowing was measured at different values of vacuum pressure and RPM of the seed disc. On balance, an increase in vacuum pressure results in improved sowing quality due to stronger adherence of seeds to the seed disc. Lower values of vacuum pressure do not exert significant effects on the quality of sowing, regardless of the seed fraction. However, higher RPM of the seed disc entails an increase in the coefficient of variation. On the basis of the results obtained, a mathematical model for predicting changes in the coefficient of variation of sowing quality under different operating parameters was developed.
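The quality metric in this abstract, the coefficient of variation of in-row seed spacing, can be sketched directly; the fitted model terms relating it to vacuum pressure and disc RPM are not given in the abstract, so only the metric itself is shown here (positions and units are illustrative assumptions):

```python
import statistics

def spacing_cv(seed_positions):
    """Coefficient of variation (%) of in-row seed spacing, the sowing
    quality metric used in the abstract. Lower CV = more uniform stand.
    seed_positions are cumulative in-row positions, e.g. in mm."""
    spacings = [b - a for a, b in zip(seed_positions, seed_positions[1:])]
    mean = statistics.mean(spacings)
    sd = statistics.stdev(spacings)
    return 100.0 * sd / mean

# Perfectly uniform 50 mm spacing gives a CV of 0 %.
print(spacing_cv([0, 50, 100, 150, 200]))   # 0.0
print(spacing_cv([0, 40, 100, 145, 210]))   # irregular stand, higher CV
```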
Spencer, Thomas; Schuerch, Mark; Nicholls, Robert J.; Hinkel, Jochen; Lincke, Daniel; Vafeidis, A. T.; Reef, Ruth; McFadden, Loraine; Brown, Sally
2016-04-01
The Dynamic Interactive Vulnerability Assessment Wetland Change Model (DIVA_WCM) comprises a dataset of contemporary global coastal wetland stocks (estimated at 756 × 10³ km² in 2011), mapped to a one-dimensional global database, and a model of the macro-scale controls on wetland response to sea-level rise. Three key drivers of wetland response to sea-level rise are considered: 1) rate of sea-level rise relative to tidal range; 2) lateral accommodation space; and 3) sediment supply. The model is tuned by expert knowledge, parameterised with quantitative data where possible, and validated against mapping associated with two large-scale mangrove and saltmarsh vulnerability studies. It is applied across 12,148 coastal segments (mean length 85 km) to the year 2100. The model provides better-informed macro-scale projections of likely patterns of future coastal wetland losses across a range of sea-level rise scenarios and varying assumptions about the construction of coastal dikes to prevent sea flooding (as dikes limit lateral accommodation space and cause coastal squeeze). With 50 cm of sea-level rise by 2100, the model predicts a loss of 46-59% of global coastal wetland stocks. A global coastal wetland loss of 78% is estimated under high sea-level rise (110 cm by 2100) accompanied by maximum dike construction. The primary driver for high vulnerability of coastal wetlands to sea-level rise is coastal squeeze, a consequence of long-term coastal protection strategies. Under low sea-level rise (29 cm by 2100) losses do not exceed ca. 50% of the total stock, even for the same adverse dike construction assumptions. The model results confirm the widespread paradigm that wetlands subject to a micro-tidal regime are likely to be more vulnerable to loss than those in macro-tidal environments. Countering these potential losses will require both climate mitigation (a global response) to minimise sea-level rise and maximisation of accommodation space and sediment supply (a regional
Hüser, Imke; Harder, Hartwig; Heil, Angelika; Kaiser, Johannes W.
2017-09-01
Lagrangian particle dispersion models (LPDMs) in backward mode are widely used to quantify the impact of transboundary pollution on downwind sites. Most LPDM applications count particles with a technique that introduces a so-called footprint layer (FL) with constant height, in which passing air tracer particles are assumed to be affected by surface emissions. The mixing layer dynamics are represented by the underlying meteorological model. This particle counting technique implicitly assumes that the atmosphere is well mixed in the FL. We have performed backward trajectory simulations with the FLEXPART model starting at Cyprus to calculate the sensitivity to emissions of upwind pollution sources. The emission sensitivity is used to quantify source contributions at the receptor and support the interpretation of ground measurements carried out during the CYPHEX campaign in July 2014. Here we analyse the effects of different constant and dynamic FL height assumptions. The results show that calculations with FL heights of 100 and 300 m yield similar but still discernible results. Comparison of calculations with FL heights constant at 300 m and dynamically following the planetary boundary layer (PBL) height exhibits systematic differences, with daytime and night-time sensitivity differences compensating for each other. The differences at daytime when a well-mixed PBL can be assumed indicate that residual inaccuracies in the representation of the mixing layer dynamics in the trajectories may introduce errors in the impact assessment on downwind sites. Emissions from vegetation fires are mixed up by pyrogenic convection which is not represented in FLEXPART. Neglecting this convection may lead to severe over- or underestimations of the downwind smoke concentrations. Introducing an extreme fire source from a different year in our study period and using fire-observation-based plume heights as reference, we find an overestimation of more than 60 % by the constant FL height
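The particle-counting idea behind the footprint layer (FL) can be sketched as follows: a particle contributes to the emission sensitivity only while it is below the FL top, which is either a constant height or the model's dynamic PBL height. This is a toy illustration of the counting rule, not FLEXPART code; all heights are made-up numbers:

```python
def footprint_sensitivity(heights, fl_heights):
    """Fraction of trajectory time steps a particle spends below the
    footprint layer top, i.e. where it is assumed to pick up surface
    emissions. fl_heights may be a constant (e.g. 300 m) or a per-step
    PBL-height series following the mixing layer dynamics."""
    if isinstance(fl_heights, (int, float)):
        fl_heights = [fl_heights] * len(heights)
    inside = sum(1 for h, top in zip(heights, fl_heights) if h <= top)
    return inside / len(heights)

heights = [80, 250, 600, 1200, 400, 150]    # particle heights along a trajectory (m)
pbl     = [300, 500, 900, 1500, 700, 250]   # dynamic PBL top at the same steps (m)
const_fl   = footprint_sensitivity(heights, 300)   # constant 300 m FL
dynamic_fl = footprint_sensitivity(heights, pbl)   # FL follows the PBL
```

The systematic difference between the two sensitivities for the same trajectory is exactly the kind of constant-versus-dynamic FL discrepancy the study quantifies.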
DEFF Research Database (Denmark)
Pena Diaz, Alfredo; Réthoré, Pierre-Elouan; Rathmann, Ole
2014-01-01
We evaluate a modified version of the Park wake model against power data from a west-east row in the middle of the Horns Rev I offshore wind farm. The evaluation is performed on data classified in four different atmospheric stability conditions, for a narrow wind speed range, and a wide range...... turbines on the row and those using the WAsP recommended value closer to the data for the first turbines. It is generally seen that under stable and unstable atmospheric conditions the power deficits are the highest and lowest, respectively, but the wind conditions under both stability regimes...
DEFF Research Database (Denmark)
Peña, Alfredo; Réthoré, Pierre-Elouan; Rathmann, Ole
2013-01-01
Here, we evaluate a modified version of the Park wake model against power data from a west-east row in the middle of the Horns Rev I offshore wind farm. The evaluation is performed on data classified in four different atmospheric stability conditions, for a narrow wind speed range, and a wide range...... turbines and those using the WAsP recommended value closer to the data for the first turbines. It is generally seen that under stable and unstable atmospheric conditions the power deficits are the highest and lowest, respectively, but the wind conditions under both stability regimes are different...
Korucu, Ayse; Miller, Richard
2016-11-01
Direct numerical simulations (DNS) of temporally developing shear flames are used to investigate both equation of state (EOS) and unity-Lewis (Le) number assumption effects in hydrocarbon flames at elevated pressure. A reduced kerosene/air mechanism including a semi-global soot formation/oxidation model is used to study soot formation/oxidation processes in a temporally developing hydrocarbon shear flame operating at both atmospheric and elevated pressures for the cubic Peng-Robinson real fluid EOS. Results are compared to simulations using the ideal gas law (IGL). The results show that while the unity-Le number assumption with the IGL EOS under-predicts the flame temperature for all pressures, with the real fluid EOS it under-predicts the flame temperature for 1 and 35 atm and over-predicts it for the rest. The soot mass fraction, Ys, is only under-predicted for the 1 atm flame for both the IGL and real fluid EOS models. While Ys is over-predicted at elevated pressures with the IGL EOS, for the real fluid EOS the Ys predictions are similar to results using a non-unity Le model derived from non-equilibrium thermodynamics and real diffusivities. Adopting the unity-Le assumption is shown to cause misprediction of Ys, the flame temperature, and the mass fractions of CO, H and OH.
40 CFR 264.150 - State assumption of responsibility.
2010-07-01
§ 264.150 State assumption of responsibility (40 CFR, Protection of Environment; ... Facilities, Financial Requirements). (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or ...
40 CFR 261.150 - State assumption of responsibility.
2010-07-01
§ 261.150 State assumption of responsibility (40 CFR, Protection of Environment; ... Excluded Hazardous Secondary Materials). (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability ...
40 CFR 265.150 - State assumption of responsibility.
2010-07-01
§ 265.150 State assumption of responsibility (40 CFR, Protection of Environment; ... Storage, and Disposal Facilities, Financial Requirements). (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the ...
40 CFR 144.66 - State assumption of responsibility.
2010-07-01
§ 144.66 State assumption of responsibility (40 CFR, Protection of Environment; ... Programs (Continued), Underground Injection Control Program, Financial Responsibility: Class I Hazardous Waste Injection Wells). (a) If a State either assumes legal ...
40 CFR 267.150 - State assumption of responsibility.
2010-07-01
§ 267.150 State assumption of responsibility (40 CFR, Protection of Environment; ... Standardized Permit, Financial Requirements). (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability ...
40 CFR 761.2 - PCB concentration assumptions for use.
2010-07-01
§ 761.2 PCB concentration assumptions for use (40 CFR, Protection of Environment; ... and Use Prohibitions, General). (a)(1) Any person may ..., oil-filled cable, and rectifiers whose PCB concentration is not established contain PCBs at < 50 ppm ...
Distributed automata in an assumption-commitment framework
Indian Academy of Sciences (India)
We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...
Basic assumptions in statistical analyses of data in biomedical ...
African Journals Online (AJOL)
If one or more assumptions are violated, an alternative procedure must be used to obtain valid results. This article aims at highlighting some basic assumptions in statistical analyses of data in biomedical sciences. Keywords: samples, independence, non-parametric, parametric, statistical analyses. Int. J. Biol. Chem. Sci. Vol.
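The fallback logic this abstract describes, check the assumptions first, then switch to a non-parametric alternative if they are violated, can be sketched with a simple skewness screen. The cutoff and the skewness rule of thumb are illustrative assumptions, not the article's recommended checks:

```python
import statistics

def skewness(xs):
    """Sample skewness: a quick screen for the normality assumption
    behind parametric tests (a rule of thumb, not a formal test)."""
    n, m = len(xs), statistics.mean(xs)
    s = statistics.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

def choose_procedure(xs, skew_cutoff=1.0):
    """If the data look badly skewed, fall back to a non-parametric
    procedure, as the article recommends when assumptions are violated."""
    return "parametric" if abs(skewness(xs)) < skew_cutoff else "non-parametric"

print(choose_procedure([4.9, 5.1, 5.0, 4.8, 5.2, 5.0]))   # roughly symmetric data
print(choose_procedure([1, 1, 1, 2, 2, 3, 20]))           # heavy right skew
```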
Reservoir management under geological uncertainty using fast model update
Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.
2015-01-01
Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU
Directory of Open Access Journals (Sweden)
Yi Zheng
2018-03-01
Full Text Available This research studied the decision-making of duopoly manufacturers who invest in green technology under a cap-and-trade system. It was assumed that two manufacturers produce products that are substitutable for one another. On the basis of this assumption, the optimal production capacity, price, and green technology investment of the duopoly manufacturers under a cap-and-trade system were obtained. Whether the optimal production quantity of the duopoly manufacturers increases or decreases under a cap-and-trade system depends on their green technology level. The increase of the optimal price, as well as whether the maximum expected profits increase or decrease, depends on the initial carbon emission quota granted by the government. Our research indicates that the carbon emission per unit product is inversely proportional to the market share of an enterprise and becomes an important index of the core competitiveness of an enterprise.
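The qualitative claim, that a firm's optimal output under cap-and-trade depends on its green technology level (emissions per unit), can be illustrated with a toy one-period profit function and a grid-search best response. The linear demand, the permit-trading term, and every number below are illustrative assumptions, not the paper's model or calibration:

```python
def profit(q, q_rival, quota, emis_per_unit,
           a=100.0, b=1.0, c=10.0, carbon_price=5.0):
    """Toy profit for one of two manufacturers selling substitutable
    products under cap-and-trade. Linear inverse demand
    p = a - b*(q + q_rival); emissions above the free quota are bought
    at carbon_price, surplus permits are sold at the same price."""
    price = a - b * (q + q_rival)
    permit_cashflow = carbon_price * (quota - emis_per_unit * q)
    return (price - c) * q + permit_cashflow

def best_quantity(q_rival, quota, emis_per_unit):
    """Grid-search best response; a cleaner green technology (lower
    emissions per unit) supports a larger optimal output."""
    grid = [i * 0.1 for i in range(0, 1001)]
    return max(grid, key=lambda q: profit(q, q_rival, quota, emis_per_unit))

dirty = best_quantity(q_rival=30.0, quota=50.0, emis_per_unit=4.0)
clean = best_quantity(q_rival=30.0, quota=50.0, emis_per_unit=1.0)
```

In this toy setting the analytic best response is q* = (a - b·q_rival - c - carbon_price·emis_per_unit) / (2b), so lowering emissions per unit from 4 to 1 raises the optimal quantity, mirroring the abstract's dependence on green technology level.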
Finite element modelling of helmeted head impact under frontal ...
Indian Academy of Sciences (India)
Abstract. Finite element models of the head and helmet were used to study contact forces during frontal impact of the head with a rigid surface. The finite element model of the head consists of skin, skull, cerebro-spinal fluid (CSF), brain, tentorium and falx. The finite element model of the helmet consists of shell and foam.
PFP issues/assumptions development and management planning guide
International Nuclear Information System (INIS)
SINCLAIR, J.C.
1999-01-01
The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, both open and current issues/assumptions (Appendix A), and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, it will periodically require revisions resulting from improvements of the information, processes, and techniques described here. Revisions that suggest improved processes will only require PFP management approval
Validation of spectral gas radiation models under oxyfuel conditions
Energy Technology Data Exchange (ETDEWEB)
Becher, Johann Valentin
2013-05-15
Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4–5.4 µm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3 %. The exponential-wide-band model showed a deviation of 6 %. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2 % and was therefore recommended as a reference model for the
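One plausible reading of the band transmissivity deviation used for the ranking above can be sketched as a band-averaged comparison between a candidate model and the benchmark spectrum. The exact band definition is not given in this excerpt, so the simple averaging below is an assumption:

```python
def band_transmissivity_deviation(tau_model, tau_reference):
    """Band-averaged transmissivity deviation (in %) between a candidate
    spectral model and a benchmark, both sampled on the same wavenumber
    grid - the comparison metric quoted in the abstract (e.g. 3 % for
    the statistical narrow-band models)."""
    n = len(tau_reference)
    avg_model = sum(tau_model) / n
    avg_ref = sum(tau_reference) / n
    return 100.0 * abs(avg_model - avg_ref) / avg_ref

# Illustrative spectra: a candidate model slightly off the benchmark.
ref   = [0.80, 0.60, 0.40, 0.60, 0.80]
model = [0.82, 0.63, 0.38, 0.57, 0.84]
dev = band_transmissivity_deviation(model, ref)
```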
Chiapetto, M.; Malerba, L.; Becquart, C. S.
2015-07-01
This work extends the object kinetic Monte Carlo model for neutron irradiation-induced nanostructure evolution in Fe-C binary alloys developed in [1], introducing the effects of substitutional solutes like Mn and Ni. The objective is to develop a model able to describe the nanostructural evolution of both vacancy and self-interstitial atom (SIA) defect cluster populations in Fe(C)MnNi neutron-irradiated model alloys at the operational temperature of light water reactors (∼300 °C), by simulating specific reference irradiation experiments. To do this, the effects of the substitutional solutes of interest are introduced, under simplifying assumptions, using a "grey alloy" scheme. Mn and Ni solute atoms are not explicitly introduced in the model, which therefore cannot describe their redistribution under irradiation, but their effect is introduced by modifying the parameters that govern the mobility of both SIA and vacancy clusters. In particular, the reduction of the mobility of point-defect clusters as a consequence of the presence of solutes proved to be key to explain the experimentally observed disappearance of detectable defect clusters with increasing solute content. Solute concentration is explicitly taken into account in the model as a variable determining the slowing down of self-interstitial clusters; small vacancy clusters, on the other hand, are assumed to be significantly slowed down by the presence of solutes, while for clusters bigger than 10 vacancies their complete immobility is postulated. The model, which is fully based on physical considerations and only uses a few parameters for calibration, is found to be capable of reproducing the experimental trends in terms of density and size distribution of the irradiation-induced defect populations with dose, as compared to the reference experiment, thereby providing insight into the physical mechanisms that influence the nanostructural evolution undergone by this material during irradiation.
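The "grey alloy" rule described above, solutes are not tracked explicitly but slow defect clusters down, with large vacancy clusters postulated immobile, can be sketched as a mobility function. The functional form and the parameters d0 and alpha are made-up illustrative assumptions; only the qualitative rules come from the abstract:

```python
def cluster_mobility(kind, size, solute_conc, d0=1.0, alpha=50.0):
    """'Grey alloy' mobility rule sketched from the abstract: SIA and
    small vacancy clusters are slowed in proportion to the solute
    concentration; vacancy clusters larger than 10 vacancies are treated
    as completely immobile. Returns a relative mobility (d0 = pure Fe)."""
    if kind == "vacancy" and size > 10:
        return 0.0                        # postulated complete immobility
    slowdown = 1.0 / (1.0 + alpha * solute_conc)
    return d0 * slowdown

print(cluster_mobility("sia", size=4, solute_conc=0.0))       # 1.0 (pure Fe)
print(cluster_mobility("sia", size=4, solute_conc=0.02))      # slowed by solutes
print(cluster_mobility("vacancy", size=20, solute_conc=0.02)) # 0.0 (immobile)
```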
Assessing and relaxing assumptions in quasi-simplex models
Lugtig, Peter|info:eu-repo/dai/nl/304824658; Cernat, Alexandru; Uhrig, Noah; Watson, Nicole
2014-01-01
Panel data (repeated measures of the same individuals) have become more and more popular in research, as they offer a number of unique advantages, such as enabling researchers to answer questions about individual change and helping to deal (partially) with issues linked to causality. But this type of data
Numerical Simulation of the Heston Model under Stochastic Correlation
Directory of Open Access Journals (Sweden)
Long Teng
2017-12-01
Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to be able to price vanilla options in stochastic volatility and correlation models, in this work, we study the extension of the Heston model by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model with stochastic correlations. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
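The idea of a Heston model whose spot/vol correlation itself follows an SDE can be sketched with a plain Euler-Maruyama scheme: the correlation mean-reverts and is clipped to the valid range at each step. This is a generic illustration in the spirit of the extension, not the paper's algorithm; every parameter value is an assumption:

```python
import math, random

def heston_stochastic_corr_paths(s0=100.0, v0=0.04, kappa=2.0, theta=0.04,
                                 xi=0.3, r=0.01, rho0=-0.5, kappa_rho=1.0,
                                 theta_rho=-0.5, sigma_rho=0.2,
                                 T=1.0, steps=100, n_paths=2000, seed=7):
    """Euler-Maruyama sketch of a Heston model whose correlation rho
    follows a mean-reverting SDE, clipped to (-1, 1). Uses full
    truncation (vp = max(v, 0)) for the variance and a log-Euler step
    for the spot, so E[S_T] = s0 * exp(r*T) under the risk-neutral drift."""
    rng = random.Random(seed)
    dt = T / steps
    terminal = []
    for _ in range(n_paths):
        s, v, rho = s0, v0, rho0
        for _ in range(steps):
            z1, z2, z3 = (rng.gauss(0, 1) for _ in range(3))
            dw_s = z1
            dw_v = rho * z1 + math.sqrt(1 - rho * rho) * z2  # correlated with dW_s
            vp = max(v, 0.0)                                 # full truncation
            s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * dw_s)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * dw_v
            rho += kappa_rho * (theta_rho - rho) * dt + sigma_rho * math.sqrt(dt) * z3
            rho = max(-0.99, min(0.99, rho))                 # keep correlation valid
        terminal.append(s)
    return terminal

paths = heston_stochastic_corr_paths()
mean_terminal = sum(paths) / len(paths)
```

Because the drift is risk-neutral, the sample mean of the terminal prices should sit near s0·exp(rT) ≈ 101, which gives a quick sanity check on the discretization before any option pricing.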
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
Estimation and prediction under local volatility jump-diffusion model
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
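The jump component that this study combines with a local volatility surface can be illustrated with a Merton-style jump-diffusion draw of terminal prices; the compensated drift keeps the simulation risk-neutral. The three-stage calibration described in the abstract is not reproduced here, and all parameter values are illustrative assumptions:

```python
import math, random

def merton_terminal_prices(s0=100.0, r=0.01, sigma=0.2, lam=0.5,
                           mu_j=-0.1, sigma_j=0.15, T=1.0,
                           n_paths=4000, seed=3):
    """Terminal prices under a Merton-style jump-diffusion: lognormal
    diffusion plus a Poisson number of lognormal jumps. The drift is
    compensated by lam*k so that E[S_T] = s0 * exp(r*T)."""
    rng = random.Random(seed)
    k = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0   # expected relative jump size
    drift = (r - 0.5 * sigma ** 2 - lam * k) * T
    out = []
    for _ in range(n_paths):
        # Sample the Poisson jump count by summing exponential waiting times.
        n_jumps, t = 0, rng.expovariate(lam)
        while t < T:
            n_jumps += 1
            t += rng.expovariate(lam)
        jump_sum = sum(rng.gauss(mu_j, sigma_j) for _ in range(n_jumps))
        z = rng.gauss(0.0, 1.0)
        out.append(s0 * math.exp(drift + sigma * math.sqrt(T) * z + jump_sum))
    return out

prices = merton_terminal_prices()
mean_price = sum(prices) / len(prices)   # should sit near s0 * exp(r*T)
```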
Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.
Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A
2017-04-01
The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry.
Finite element modelling of helmeted head impact under frontal ...
Indian Academy of Sciences (India)
2016-08-26
Aug 26, 2016 ... Finite element models of the head and helmet were used to study contact forces during frontal impact of the head with a rigid surface. The finite element model of the head consists of skin, skull, cerebro-spinal fluid (CSF), brain, tentorium and falx. The finite element model of the helmet consists of shell and ...
Development and Validation of a Minichannel Evaporator Model under Dehumidification
Hassan, Abdelrahman Hussein Abdelhalim
2016-01-01
[EN] In the first part of the current thesis, two fundamental numerical models (Fin2D-W and Fin1D-MB) for analyzing the air-side performance of minichannel evaporators were developed and verified. The Fin2D-W model applies a comprehensive two-dimensional scheme to discretize the evaporator. On the other hand, the Fin1D-MB model is based on the one-dimensional fin theory in conjunction with the moving boundaries technique along the fin height. The first objective of the two presented models is...
Multitasking TORT under UNICOS: Parallel performance models and measurements
International Nuclear Information System (INIS)
Barnett, A.; Azmy, Y.Y.
1999-01-01
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
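The generic shape of a parallel overhead model like the one derived here can be sketched with an Amdahl-style speedup formula extended by a per-processor overhead term. The TORT-specific model is not given in the abstract, so the functional form and numbers below are assumptions for illustration:

```python
def predicted_speedup(n_procs, serial_frac, overhead_per_proc=0.0):
    """Amdahl-style speedup with a linear per-processor overhead term:
    normalized run time = serial part + parallel part / n + overhead * n.
    The overhead term is what a parallel overhead model tries to capture;
    it eventually dominates and caps the achievable speedup."""
    t_parallel = (serial_frac + (1.0 - serial_frac) / n_procs
                  + overhead_per_proc * n_procs)
    return 1.0 / t_parallel

# With 5 % serial work and no overhead, 8 processors give ~5.93x;
# adding overhead pulls the measured speedup below that model.
print(round(predicted_speedup(8, 0.05), 2))        # 5.93
print(round(predicted_speedup(8, 0.05, 0.01), 2))  # lower, overhead included
```

Fitting the overhead term to measured run times, and redesigning the algorithm to shrink it, is the comparison loop the abstract describes against the two standard test problems.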
Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements
International Nuclear Information System (INIS)
Azmy, Y.Y.; Barnett, D.A.
1999-01-01
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
Modeling delamination of FRP laminates under low velocity impact
Jiang, Z.; Wen, H. M.; Ren, S. L.
2017-09-01
Fiber reinforced plastic (FRP) laminates have been increasingly used in engineering fields such as aeronautics, astronautics, transportation and naval architecture, and their impact response and failure are a major concern in the academic community. A new numerical model is suggested for fiber reinforced plastic composites. The model considers FRP laminates to be constituted of unidirectional laminated plates with adhesive layers. A modified adhesive layer damage model that considers strain rate effects is incorporated into the ABAQUS/EXPLICIT finite element program via the user-defined material subroutine VUMAT. The delamination predicted by the present model is in good agreement with experimental results for low velocity impact.
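The adhesive-layer damage idea can be illustrated with the bilinear cohesive law commonly used for delamination: linear elastic up to a damage-onset opening, then linear softening to full failure. This is a generic sketch, not the paper's VUMAT; the rate_factor stands in for the strain-rate modification the abstract adds, and all parameter values are assumptions:

```python
def bilinear_cohesive_traction(delta, delta0=0.01, delta_f=0.1,
                               k0=1000.0, rate_factor=1.0):
    """Bilinear cohesive (adhesive-layer) traction-separation law:
    linear elastic up to the damage-onset opening delta0, then linear
    softening to zero traction at the failure opening delta_f.
    rate_factor > 1 would mimic strain-rate strengthening."""
    if delta <= delta0:
        return k0 * delta * rate_factor      # undamaged, linear branch
    if delta >= delta_f:
        return 0.0                           # fully delaminated
    damage = (delta - delta0) / (delta_f - delta0)
    return k0 * delta0 * (1.0 - damage) * rate_factor

peak = bilinear_cohesive_traction(0.01)      # traction at damage onset
```

In an explicit FE user subroutine this law would be evaluated per integration point each increment, with the damage variable stored so that unloading follows the degraded stiffness.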
Li, T.; Hasegawa, T.; Yin, X.; Zhu, Y.; Boote, K.; Adam, M.; Bregaglio, S.; Buis, S.; Confalonieri, R.; Fumoto, T.; Gaydon, D.; Marcaida III, M.; Nakagawa, H.; Oriol, P.; Ruane, A.C.; Ruget, F.; Singh, B.; Singh, U.; Tang, L.; Yoshida, H.; Zhang, Z.; Bouman, B.
2015-01-01
Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We
On some unwarranted tacit assumptions in cognitive neuroscience.
Mausfeld, Rainer
2012-01-01
The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input-output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings.
On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†
Mausfeld, Rainer
2011-01-01
The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet
2012-01-01
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
2011-05-23
... Part 50 RIN 1105-AB38 Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian... State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to request that the United States accept concurrent criminal jurisdiction within the tribe's Indian country, and for the Attorney General...
2011-04-15
... covered by title IV of the Employee Retirement Income Security Act of 1974. DATES: Effective May 1, 2011...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in Appendix B to Part 4022 to...
2012-12-21
... Plans to prescribe interest assumptions for valuation dates in the first quarter of 2013. The interest... plan benefits under terminating single-employer plans covered by title IV of the Employee Retirement... regulation are updated quarterly and are intended to reflect current conditions in the financial and annuity...
Baroukh, Caroline; Muñoz-Tamayo, Rafael; Steyer, Jean-Philippe; Bernard, Olivier
2014-01-01
Metabolic modeling is a powerful tool to understand, predict and optimize bioprocesses, particularly when they imply intracellular molecules of interest. Unfortunately, the use of metabolic models for time varying metabolic fluxes is hampered by the lack of experimental data required to define and calibrate the kinetic reaction rates of the metabolic pathways. For this reason, metabolic models are often used under the balanced growth hypothesis. However, for some processes such as the photoautotrophic metabolism of microalgae, the balanced-growth assumption appears to be unreasonable because of the synchronization of their circadian cycle on the daily light. Yet, understanding microalgae metabolism is necessary to optimize the production yield of bioprocesses based on this microorganism, as for example production of third-generation biofuels. In this paper, we propose DRUM, a new dynamic metabolic modeling framework that handles the non-balanced growth condition and hence accumulation of intracellular metabolites. The first stage of the approach consists in splitting the metabolic network into sub-networks describing reactions which are spatially close, and which are assumed to satisfy balanced growth condition. The left metabolites interconnecting the sub-networks behave dynamically. Then, thanks to Elementary Flux Mode analysis, each sub-network is reduced to macroscopic reactions, for which simple kinetics are assumed. Finally, an Ordinary Differential Equation system is obtained to describe substrate consumption, biomass production, products excretion and accumulation of some internal metabolites. DRUM was applied to the accumulation of lipids and carbohydrates of the microalgae Tisochrysis lutea under day/night cycles. The resulting model describes accurately experimental data obtained in day/night conditions. It efficiently predicts the accumulation and consumption of lipids and carbohydrates. PMID:25105494
Energy Technology Data Exchange (ETDEWEB)
Chiapetto, M., E-mail: mchiapet@sckcen.be [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium); Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Becquart, C.S. [Unité Matériaux Et Transformations (UMET), UMR 8207, Université de Lille 1, ENSCL, F-59600 Villeneuve d’Ascq Cedex (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France); Domain, C. [EDF R&D, Département Matériaux et Mécanique des Composants, Les Renardières, F-77250 Moret sur Loing (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France); Malerba, L. [SCK-CEN, Nuclear Materials Science Institute, Boeretang 200, B-2400 Mol (Belgium)
2015-06-01
Radiation-induced embrittlement of bainitic steels is one of the most important lifetime limiting factors of existing nuclear light water reactor pressure vessels. The primary mechanism of embrittlement is the obstruction of dislocation motion produced by nanometric defect structures that develop in the bulk of the material due to irradiation. The development of models that describe, based on physical mechanisms, the nanostructural changes in these types of materials due to neutron irradiation are expected to help to better understand which features are mainly responsible for embrittlement. The chemical elements that are thought to influence most the response under irradiation of low-Cu RPV steels, especially at high fluence, are Ni and Mn, hence there is an interest in modelling the nanostructure evolution in irradiated FeMnNi alloys. As a first step in this direction, we developed sets of parameters for object kinetic Monte Carlo (OKMC) simulations that allow this to be done, under simplifying assumptions, using a “grey alloy” approach that extends the already existing OKMC model for neutron irradiated Fe–C binary alloys [1]. Our model proved to be able to describe the trend in the buildup of irradiation defect populations at the operational temperature of LWR (∼300 °C), in terms of both density and size distribution of the defect cluster populations, in FeMnNi model alloys as compared to Fe–C. In particular, the reduction of the mobility of point-defect clusters as a consequence of the presence of solutes proves to be key to explain the experimentally observed disappearance of detectable point-defect clusters with increasing solute content.
Blanquart, François; Lehtinen, Sonja; Fraser, Christophe
2017-05-31
The frequency of resistance to antibiotics in Streptococcus pneumoniae has been stable over recent decades. For example, penicillin non-susceptibility in Europe has fluctuated between 12% and 16% without any major time trend. In spite of long-term stability, resistance fluctuates over short time scales, presumably in part due to seasonal fluctuations in antibiotic prescriptions. Here, we develop a model that describes the evolution of antibiotic resistance under selection by multiple antibiotics prescribed at seasonally changing rates. This model was inspired by, and fitted to, published data on monthly antibiotics prescriptions and frequency of resistance in two communities in Israel over 5 years. Seasonal fluctuations in antibiotic usage translate into small fluctuations of the frequency of resistance around the average value. We describe these dynamics using a perturbation approach that encapsulates all ecological and evolutionary forces into a generic model, whose parameters quantify a force stabilizing the frequency of resistance around the equilibrium and the sensitivity of the population to antibiotic selection. Fitting the model to the data revealed a strong stabilizing force, typically two to five times stronger than direct selection due to antibiotics. The strong stabilizing force explains that resistance fluctuates in phase with usage, as antibiotic selection alone would result in resistance fluctuating behind usage with a lag of three months when antibiotic use is seasonal. While most antibiotics selected for increased resistance, intriguingly, cephalosporins selected for decreased resistance to penicillins and macrolides, an effect consistent in the two communities. One extra monthly prescription of cephalosporins per 1000 children decreased the frequency of penicillin-resistant strains by 1.7%. This model emerges under minimal assumptions, quantifies the forces acting on resistance and explains up to 43% of the temporal variation in resistance.
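The perturbation picture in the abstract above, a force pulling the resistance frequency back toward equilibrium plus a seasonal selection term, can be sketched as a linear relaxation equation df/dt = -k(f - f0) + σ·u(t). The sketch below uses illustrative parameter values, not the fitted ones; it reproduces the qualitative claim that strong stabilization makes resistance fluctuate in phase with usage, while weak stabilization lags usage by roughly three months (a quarter period).

```python
import math

def simulate_resistance(k, sigma, months=120, dt=0.01):
    """Euler-integrate df/dt = -k*(f - f0) + sigma*u_anom(t), where
    u_anom is a seasonal (12-month period) antibiotic-usage anomaly.
    k is the stabilizing force, sigma the selection sensitivity; both
    values used here are hypothetical."""
    f0, f = 0.14, 0.14               # baseline resistance frequency (~14%)
    traj = []
    steps_per_month = int(1 / dt)
    for step in range(months * steps_per_month):
        t = step * dt                # time in months
        u_anom = math.cos(2 * math.pi * t / 12)
        f += dt * (-k * (f - f0) + sigma * u_anom)
        if step % steps_per_month == 0:
            traj.append(f)           # monthly sample
    return traj
```

With k much larger than the seasonal frequency the response peaks in the same month as usage; with small k the peak trails usage by close to a quarter period.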
Directory of Open Access Journals (Sweden)
Caroline Baroukh
Full Text Available Metabolic modeling is a powerful tool to understand, predict and optimize bioprocesses, particularly when they imply intracellular molecules of interest. Unfortunately, the use of metabolic models for time varying metabolic fluxes is hampered by the lack of experimental data required to define and calibrate the kinetic reaction rates of the metabolic pathways. For this reason, metabolic models are often used under the balanced growth hypothesis. However, for some processes such as the photoautotrophic metabolism of microalgae, the balanced-growth assumption appears to be unreasonable because of the synchronization of their circadian cycle on the daily light. Yet, understanding microalgae metabolism is necessary to optimize the production yield of bioprocesses based on this microorganism, as for example production of third-generation biofuels. In this paper, we propose DRUM, a new dynamic metabolic modeling framework that handles the non-balanced growth condition and hence accumulation of intracellular metabolites. The first stage of the approach consists in splitting the metabolic network into sub-networks describing reactions which are spatially close, and which are assumed to satisfy balanced growth condition. The left metabolites interconnecting the sub-networks behave dynamically. Then, thanks to Elementary Flux Mode analysis, each sub-network is reduced to macroscopic reactions, for which simple kinetics are assumed. Finally, an Ordinary Differential Equation system is obtained to describe substrate consumption, biomass production, products excretion and accumulation of some internal metabolites. DRUM was applied to the accumulation of lipids and carbohydrates of the microalgae Tisochrysis lutea under day/night cycles. The resulting model describes accurately experimental data obtained in day/night conditions. It efficiently predicts the accumulation and consumption of lipids and carbohydrates.
A model for cooling systems analysis under natural convection
International Nuclear Information System (INIS)
Santos, S.J. dos.
1988-01-01
The present work analyses thermosyphons and their non-dimensional numbers. The mathematical model considers constant-pressure, single-phase incompressible flow. It simulates both open and closed thermosyphons, and deals with heat sources like PWR cores or electrical heaters and cold sinks like heat exchangers or reservoirs. A computer code named STRATS was developed based on this model. (author)
Modeling detour behavior of pedestrian dynamics under different conditions
Qu, Yunchao; Xiao, Yao; Wu, Jianjun; Tang, Tao; Gao, Ziyou
2018-02-01
Pedestrian simulation approaches are widely used to reveal human behavior and evaluate the performance of crowd evacuation. Among existing pedestrian simulation models, the social force model is capable of predicting many collective phenomena. Detour behavior occurs in many cases and is a dominant factor in crowd evacuation efficiency, yet limited attention has been paid to analyzing and modeling its characteristics. In this paper, a modified social force model integrated with a Voronoi diagram is proposed to calculate the detour direction and preferred velocity. In addition, taking into account the locations and velocities of neighboring pedestrians, a Logit-based choice model is built to describe the detour direction choice. The proposed model is applied to analyze pedestrian dynamics in a corridor scenario with either unidirectional or bidirectional flow, and in a real-world building scenario. Simulation results show that the modified social force model including detour behavior can reduce the frequency of collision and deadlock, increase the average speed of the crowd, and predict more realistic crowd dynamics with detour behavior. The model can also potentially be applied to understand pedestrian dynamics and to design emergency management strategies for crowd evacuations.
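A minimal sketch of the Logit direction-choice step described above: candidate detour directions are scored by their deviation from the goal direction and by a crowding proxy (here, the count of neighbors within 30° of the candidate), and choice probabilities follow a multinomial Logit. The candidate set, weights, and crowding measure are assumptions for illustration, not the paper's calibrated model.

```python
import math

def logit_detour_choice(goal_angle, neighbor_angles, beta=2.0,
                        w_dev=1.0, w_crowd=1.5):
    """Pick a detour direction from discrete candidates via a Logit model.
    Utility penalizes (a) deviation from the goal line and (b) crowding.
    Returns the chosen absolute direction and the choice probabilities."""
    candidates = [math.radians(a) for a in range(-60, 61, 30)]  # rel. to goal
    utils = []
    for c in candidates:
        deviation = abs(c)
        crowd = sum(1 for n in neighbor_angles
                    if abs(((n - (goal_angle + c)) + math.pi) % (2 * math.pi)
                           - math.pi) < math.radians(30))
        utils.append(-w_dev * deviation - w_crowd * crowd)
    # multinomial Logit: P(i) proportional to exp(beta * U_i)
    exps = [math.exp(beta * u) for u in utils]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = candidates[probs.index(max(probs))]
    return goal_angle + best, probs
```

With no neighbors the pedestrian heads straight for the goal; with a cluster directly ahead, a sideways detour direction gets the highest probability.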
A model for optimization of process integration investments under uncertainty
International Nuclear Information System (INIS)
Svensson, Elin; Stroemberg, Ann-Brith; Patriksson, Michael
2011-01-01
The long-term economic outcome of energy-related industrial investment projects is difficult to evaluate because of uncertain energy market conditions. In this article, a general, multistage, stochastic programming model for the optimization of investments in process integration and industrial energy technologies is proposed. The problem is formulated as a mixed-binary linear programming model where uncertainties are modelled using a scenario-based approach. The objective is to maximize the expected net present value of the investments, which enable heat savings and decreased energy imports or increased energy exports at an industrial plant. The proposed modelling approach enables long-term planning of industrial, energy-related investments through the simultaneous optimization of immediate and later decisions. The stochastic programming approach is also suitable for modelling possibly complex process integration constraints. The general model formulation presented here is a suitable basis for more specialized case studies dealing with optimization of investments in energy efficiency. -- Highlights: → Stochastic programming approach to long-term planning of process integration investments. → Extensive mathematical model formulation. → Multi-stage investment decisions and scenario-based modelling of uncertain energy prices. → Results illustrate how investments made now affect later investment and operation opportunities. → Approach for evaluation of robustness with respect to variations in probability distribution.
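The scenario-based idea can be shown at toy scale: enumerate the binary first-stage investment decisions, evaluate each against price scenarios, and keep the plan with the highest expected net present value. The technologies, costs, and savings below are invented numbers, and a real model would use mixed-binary linear programming with recourse decisions rather than brute-force enumeration.

```python
def expected_npv(first_stage, scenarios):
    """Evaluate one first-stage investment plan against price scenarios.
    first_stage: dict tech -> bool; scenarios: list of (prob, energy_price).
    All costs, savings, and the 5-year undiscounted horizon are illustrative."""
    invest_cost = {"heat_recovery": 100.0, "boiler_upgrade": 60.0}
    heat_saved = {"heat_recovery": 30.0, "boiler_upgrade": 12.0}  # units/yr
    horizon = 5
    cost = sum(invest_cost[t] for t, chosen in first_stage.items() if chosen)
    saved = sum(heat_saved[t] for t, chosen in first_stage.items() if chosen)
    value = 0.0
    for prob, price in scenarios:
        # second stage: saved energy is valued at the scenario's price
        value += prob * horizon * saved * price
    return value - cost

def optimize(scenarios):
    """Enumerate all first-stage decisions (fine at this toy scale)."""
    best = None
    for hr in (False, True):
        for bu in (False, True):
            plan = {"heat_recovery": hr, "boiler_upgrade": bu}
            npv = expected_npv(plan, scenarios)
            if best is None or npv > best[1]:
                best = (plan, npv)
    return best
```

Under a high-price and a low-price scenario of equal probability, only the investment whose expected savings exceed its cost is selected.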
A particle model of rolling grain ripples under waves
DEFF Research Database (Denmark)
Andersen, Ken Haste
2001-01-01
A simple model for the formation of rolling grain ripples on a flat sand bed by the oscillatory flow generated by a surface wave is presented. An equation of motion is derived for the individual ripples, seen as "particles," on the otherwise flat bed. The model accounts for the initial appearance of the ripples, the subsequent coarsening of the ripples, and the final equilibrium state. The model is related to the physical parameters of the problem, and an analytical approximation for the equilibrium spacing of the ripples is developed. It is found that the spacing between the ripples scales with the square root of the nondimensional shear stress (the Shields parameter) on a flat bed. The results of the model are compared with measurements, and reasonable agreement between the model and the measurements is demonstrated. ©2001 American Institute of Physics.
Mathematical Modeling of Column-Base Connections under Monotonic Loading
Directory of Open Access Journals (Sweden)
Gholamreza Abdollahzadeh
2014-12-01
Considerable damage to steel structures occurred during the Hyogo-ken Nanbu Earthquake. Among the failures, many exposed-type column bases failed in several consistent patterns, such as brittle base plate fracture, excessive bolt elongation, unexpected early bolt failure, and inferior construction work. The lessons from these phenomena led to the need for an improved understanding of column base behavior. Joint behavior must be modeled when analyzing semi-rigid frames, which requires a mathematical model of the moment–rotation curve; the most accurate models use continuous nonlinear functions. This article presents three areas of steel joint research: (1) analysis methods for semi-rigid joints; (2) prediction methods for the mechanical behavior of joints; (3) mathematical representations of the moment–rotation curve. In the current study, a new exponential model to depict the moment–rotation relationship of column base connections is proposed. The proposed nonlinear model represents an approach to the prediction of M–θ curves, taking into account the possible failure modes and the deformation characteristics of the connection elements. The new model has three physical parameters, along with two curve-fitted factors; the physical parameters are generated from the dimensional details of the connection as well as the material properties. The M–θ curves obtained by the model are compared with published connection tests and 3D FEM research. The proposed mathematical model adequately characterizes M–θ behavior through the full range of loadings/rotations. As a result, modeling column base connections with the proposed mathematical model can give crucial information beforehand, and overcome the time-consuming workmanship and cost of experimental studies.
Shi, Wen; Kleijnen, J.P.C.
2017-01-01
Sequential bifurcation (or SB) is an efficient and effective factor-screening method; i.e., SB quickly identifies the important factors (inputs) in experiments with simulation models that have very many factors—provided the SB assumptions are valid. The specific SB assumptions are: (i) a second-order
MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT
International Nuclear Information System (INIS)
R.E. Sweeney
2001-01-01
The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.
Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document
International Nuclear Information System (INIS)
Sweeney, R.
2000-01-01
The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.
A conceptual ENSO model under realistic noise forcing
Directory of Open Access Journals (Sweden)
J. Saynisch
2006-01-01
We investigated the influence of atmospheric noise on the generation of interannual El Niño variability. To this end, we perturbed a conceptual ENSO delay model with surrogate windstress data generated from tropical windspeed measurements. The effect of the additional stochastic forcing was studied for various parameter sets including periodic and chaotic regimes. The evaluation was based on comparing spectra and amplitude-period relations between the model output and measured sea surface temperature data. The additional forcing turned out to increase the variability of the model output in general. The noise-free model was unable to reproduce the observed spectral bandwidth for any choice of parameters. On the contrary, the stochastically forced model is capable of producing a realistic spectrum. The weakly nonlinear regimes of the model exhibit a proportional relation between amplitude and period, matching the relation derived from measurement data. The chaotic regime, however, shows an inversely proportional relation. A stability analysis of the different regimes revealed that the spectra of the weakly nonlinear regimes are robust against slight parameter changes representing disregarded physical mechanisms, whereas the chaotic regime exhibits a very unstable realistic spectrum. We conclude that the model including stochastic forcing in a parameter range of moderate nonlinearity best matches the real conditions. This suggests that atmospheric noise plays an important role in the coupled tropical Pacific ocean-atmosphere system.
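A conceptual delay oscillator of the Suarez–Schopf type, with an additive noise term standing in for the windstress surrogate forcing, captures the setup in miniature: dT/dt = T - T³ - α·T(t - τ) + noise. The parameter values below are illustrative, not the regimes studied in the paper.

```python
import random

def delayed_oscillator(alpha=0.8, tau=4.0, noise=0.0,
                       tmax=200.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of a Suarez-Schopf-type delay equation,
        dT/dt = T - T^3 - alpha * T(t - tau) + noise * xi(t),
    with Gaussian white noise xi standing in for atmospheric forcing.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    lag = int(tau / dt)
    hist = [0.1] * (lag + 1)          # constant initial history
    for _ in range(int(tmax / dt)):
        T, T_lag = hist[-1], hist[-1 - lag]
        xi = rng.gauss(0.0, 1.0) * (dt ** 0.5)
        hist.append(T + dt * (T - T**3 - alpha * T_lag) + noise * xi)
    return hist[lag + 1:]
```

With this delay the fixed points are unstable, so the noise-free run already sustains bounded interannual-style oscillations; the noisy run perturbs their amplitude and period.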
Propulsion Physics Under the Changing Density Field Model
Robertson, Glen A.
2011-01-01
To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004, Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology, as, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, having implications for dark matter/energy and universe acceleration properties, and implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) model. This model relates to changes in these density fields, where the density field changes are related to the acceleration of matter within an object. These density changes in turn change how an object couples to the surrounding density fields, whereby thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model.
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
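The parametric mixture cure model underlying such designs writes the survival function as S(t) = π + (1 - π)·S_u(t), with cure fraction π and susceptible survival S_u. Below is a sketch with exponential susceptibles; all parameter values are hypothetical, and a real design would build the test statistic and sample size formula on top of data like this.

```python
import math
import random

def cure_survival(t, pi, lam):
    """Mixture cure model: S(t) = pi + (1 - pi) * exp(-lam * t)."""
    return pi + (1 - pi) * math.exp(-lam * t)

def simulate_trial(n, pi, lam, followup, seed=1):
    """Draw (time, event) pairs under the cure model: a cured fraction pi
    never fails; susceptible patients fail at exponential rate lam.
    Times are administratively censored at the follow-up horizon."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        if rng.random() < pi:
            data.append((followup, 0))                # cured: censored
        else:
            t = rng.expovariate(lam)
            data.append((min(t, followup), int(t <= followup)))
    return data
```

The plateau of S(t) at π is what makes the exponential-only design inappropriate: with long follow-up, the event fraction approaches 1 - π rather than 1.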
Vertebral stress of a cervical spine model under dynamic load.
Sadegh, A M; Tchako, A
2000-01-01
The objective of this study is to develop cervical spine models that predict the stresses in each vertebra by taking account of the biodynamic characteristics of the neck. The loads and the moments at the head point (Occipital Condyle) used for the models were determined by the rigid body dynamic response of the head due to G-z acceleration. The experimental data used were collected from the biodynamic responses of human volunteers during an acceleration in the z direction on the drop tower facility at Armstrong Laboratory at Wright Patterson Air Force Base (WPAFB). Three finite element models were developed: an elastic local model, viscoelastic local model and complete viscoelastic model. I-DEAS software was used to create the solid models, the loadings and the boundary conditions. Then, ABAQUS finite element software was employed to solve the models, and thus the stresses on each vertebral level were determined. Beam elements with different properties were employed to simulate the ligaments, articular facets and muscles. The complete viscoelastic model was subjected to 11 cases of loadings ranging from 8 G-z to 20 G-z accelerations. The von Mises and Maximum Principal stress fields, which are good indicators of bone failure, were calculated for all the cases. The results indicated that the maximum stress in all cases increased as the magnitude of the acceleration increased. The stresses in the 10 to 12 G-z cases were comfortably below the injury threshold level. The majority of the maximum stresses occurred in C6 and C4 regions.
Economic-mathematical methods and models under uncertainty
Aliyev, A G
2013-01-01
Contents: Brief Information on Finite-Dimensional Vector Space and its Application in Economics; Bases of Piecewise-Linear Economic-Mathematical Models with Regard to Influence of Unaccounted Factors in Finite-Dimensional Vector Space; Piecewise-Linear Economic-Mathematical Models with Regard to Unaccounted Factors Influence in Three-Dimensional Vector Space; Piecewise-Linear Economic-Mathematical Models with Regard to Unaccounted Factors Influence on a Plane; Bases of Software for Computer Simulation and Multivariant Prediction of Economic Even at Uncertainty Conditions on the Base of N-Comp
Finite element modeling of Balsa wood structures under severe loadings
International Nuclear Information System (INIS)
Toson, B.; Pesque, J.J.; Viot, P.
2014-01-01
In order to compute, in various situations, the requirements for transporting packages using Balsa wood as an energy absorber, a constitutive model is needed that takes into account all of the specific characteristics of the wood, such as its anisotropy, compressibility, softening, densification, and strain rate dependence. Such a model must also include the treatment of rupture of the wood when it is in traction. The complete description of wood behavior is not sufficient: robustness is also necessary because this model has to work in presence of large deformations and of many other external nonlinear phenomena in the surrounding structures. We propose such a constitutive model that we have developed using the commercial finite element package ABAQUS. The necessary data were acquired through an extensive compilation of the existing literature with the augmentation of personal measurements. Numerous validation tests are presented that represent different impact situations that a transportation cask might endure. (authors)
Calibration of CORSIM models under saturated traffic flow conditions.
2013-09-01
This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to calibrate simultaneously all the calibration parameters as well as demand patterns for any network topology....
Model Justified Search Algorithms for Scheduling Under Uncertainty
National Research Council Canada - National Science Library
Howe, Adele; Whitley, L. D
2008-01-01
.... We also identified plateaus as a significant barrier to superb performance of local search on scheduling and have studied several canonical discrete optimization problems to discover and model the nature of plateaus...
Finite element modelling of helmeted head impact under frontal ...
Indian Academy of Sciences (India)
CSF), brain, tentorium and falx. The finite element model of the helmet consists of shell and foam liner. ... mechanical behaviour of motorcycle helmet. ... the latter authors use a SI (Structural Intensity) approach to study power flow distribution.
Shriver, K A
1986-01-01
Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
Quasi-experimental study designs series-paper 7: assessing the assumptions.
Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian
2017-09-01
Quasi-experimental designs are gaining popularity in epidemiology and health systems research-in particular for the evaluation of health care practice, programs, and policy-because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
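Of the five designs listed above, Difference-in-Differences has the most compact point estimator: the before/after change in the treated group minus the before/after change in the control group, which identifies the causal effect only under the parallel-trends assumption. A minimal sketch with made-up numbers:

```python
def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences point estimate: the treated group's
    before/after change minus the control group's before/after change.
    A causal estimate only under the parallel-trends assumption."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))
```

In the example below the treated group improves by 10 and the control group by 5, so the estimated program effect is 5: the control group's change stands in for what would have happened to the treated group without the program.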
Parabolic Free Boundary Price Formation Models Under Market Size Fluctuations
Markowich, Peter A.
2016-10-04
In this paper we propose an extension of the Lasry-Lions price formation model which includes fluctuations of the numbers of buyers and vendors. We analyze the model in the case of deterministic and stochastic market size fluctuations and present results on the long time asymptotic behavior and numerical evidence and conjectures on periodic, almost periodic, and stochastic fluctuations. The numerical simulations extend the theoretical statements and give further insights into price formation dynamics.
Trade-offs underlying maternal breastfeeding decisions: A conceptual model
Tully, Kristin P.; Ball, Helen L.
2011-01-01
This paper presents a new conceptual model that generates predictions about breastfeeding decisions and identifies interactions that affect outcomes. We offer a contextual approach to infant feeding that models multi-directional influences by expanding on the evolutionary parent–offspring conflict and situation-specific breastfeeding theories. The main hypothesis generated from our framework suggests that simultaneously addressing breastfeeding costs and benefits, in relation to how they are ...
Cube Handling In Backgammon Money Games Under a Jump Model
Higgins, Mark G.
2012-01-01
A variation on Janowski's cubeful equity model is proposed for cube handling in backgammon money games. Instead of approximating the cubeful take point as an interpolation between the dead and live cube limits, a new model is developed where the cubeless probability of win evolves through a series of random jumps instead of continuous diffusion. Each jump is drawn from a distribution with zero mean and an expected absolute jump size called the "jump volatility" that can be a function of game ...
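The jump-model idea can be sketched directly: evolve the cubeless win probability by zero-mean random jumps until absorption at 0 or 1. Because the jumps have zero mean, the win probability is approximately a martingale (up to overshoot at the boundaries), so the long-run win frequency should track the starting probability. Gaussian jumps and the specific jump volatility below are illustrative assumptions, not values fitted to backgammon.

```python
import random

def simulate_game(p0, jump_vol, rng, max_steps=100000):
    """Evolve the cubeless win probability by zero-mean Gaussian jumps
    until absorption at 0 (loss) or 1 (win)."""
    p = p0
    for _ in range(max_steps):
        p += rng.gauss(0.0, jump_vol)
        if p <= 0.0:
            return 0
        if p >= 1.0:
            return 1
    return int(p >= 0.5)  # practically unreachable fallback

def win_frequency(n_games, p0=0.65, jump_vol=0.1, seed=7):
    """Monte Carlo win frequency over repeated games; approximately p0
    by the martingale property, biased slightly by boundary overshoot."""
    rng = random.Random(seed)
    wins = sum(simulate_game(p0, jump_vol, rng) for _ in range(n_games))
    return wins / n_games
```

Cube decisions in the jump model then reduce to comparing such absorption probabilities, evaluated at the take point, against the cubeful equities.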
Reflood modeling under oscillatory flow conditions with Cathare
International Nuclear Information System (INIS)
Kelly, J.M.; Bartak, J.; Janicot, A.
1993-01-01
The problems and the current status of oscillatory reflood modelling with the CATHARE code are presented. The physical models used in CATHARE for reflood modelling predicted the forced reflood experiments globally very well. Significant drawbacks existed in predicting experiments with oscillatory flow (both forced and gravity driven). First, the simpler case of forced flow oscillations was analyzed. Modelling improvements within the reflooding package resolved the problem of quench front blockages and unphysical oscillations. Good agreement with experiment for the ERSEC forced-oscillation reflood tests is now obtained. For gravity driven reflood, CATHARE predicted sustained flow oscillations during the first 100-150 s of reflood, whereas in the experiment flow oscillations were observed only during the first 25-30 s. Possible areas of modelling improvement are identified and several new correlations are suggested. The first test calculations of the BETHSY test 6.7A4 have shown that the oscillations are most sensitive to heat flux modelling downstream of the quench front. A much better agreement between CATHARE results and the experiment was obtained. However, further effort is necessary to obtain globally satisfactory predictions of gravity driven system reflood tests. (authors) 6 figs., 35 refs
Triatominae as a model of morphological plasticity under ecological pressure
Dujardin, Jean-Pierre; Panzera, P.; Schofield, C.J.
1999-01-01
The use of biochemical and genetic characters to explore species or population relationships has been applied to taxonomic questions since the 60s. In responding to the central question of the evolutionary history of Triatominae, i.e. their monophyletic or polyphyletic origin, two important questions arise: (i) to what extent is the morphologically-based classification valid for assessing phylogenetic relationships? and (ii) what are the main mechanisms underlying speciation in Triatominae...
Mathematical modelling of circular cylinder deformation under inner growth
Directory of Open Access Journals (Sweden)
A. V. Siasiev
2009-09-01
Full Text Available A problem on the stress-strain state of a viscoelastic hollow cylinder, which is grown under the action of inner pressure, is considered. Continuous accretion takes place on the internal radius, so that the radius and the pressure change according to a given law. The special case of a linear creep law is considered, and numerical results are presented as graphs of the time dependence of the stresses and displacements at different points of the cylinder.
Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)
2011-03-01
preferences in his Postulates I-IV to ensure rational decision making. Our approach to decision making satisfies the Savage postulates. Multicriteria ... Challenges associated with decision making for large complex systems in the presence of uncertainty and risk have been of special interest to scientists ... is heavily influenced by the government reports cited. Of special interest to us are the challenges associated with decision making under
Three representations of the Ising model
Kruis, J.; Maris, G.
2016-01-01
Statistical models that analyse (pairwise) relations between variables encompass assumptions about the underlying mechanism that generated the associations in the observed data. In the present paper we demonstrate that three Ising model representations exist that, although each proposes a distinct
Hydrodynamic modelling of small upland lakes under strong wind forcing
Morales, L.; French, J.; Burningham, H.
2012-04-01
Small lakes are an important source of water supply. Lakes also provide an important sedimentary archive of environmental and climate changes and of ecosystem function. Hydrodynamic controls on the transport and distribution of lake sediments, seasonal variations in thermal structure due to solar radiation, precipitation, evaporation and mixing, and the complex vertical and horizontal circulation patterns induced by the action of wind are not very well understood. The work presented here analyses the hydrodynamic motions present in small upland lakes due to circulation and internal-scale waves, and their linkages with the distribution of bottom sediment accumulation in the lake. For this purpose, a 3D hydrodynamic model is calibrated and implemented for Llyn Conwy, a small oligotrophic upland lake in North Wales, UK. The model, based around the FVCOM open source community model code, resolves the Navier-Stokes equations using a 3D unstructured mesh and a finite volume scheme. The model is forced by meteorological boundary conditions. Improvements made to the FVCOM code include a new graphical user interface to pre- and post-process the model input and results respectively, and a JONSWAP wave model to include the effects of wind-wave induced bottom stresses on lake sediment dynamics. Modelled internal-scale waves are validated against summer temperature measurements acquired from a thermistor chain deployed at the deepest part of the lake. Seiche motions were validated using data recorded by high-frequency level sensors around the lake margins, and the velocity field and the circulation patterns were validated using data recorded by an ADCP and GPS drifters. The model is shown to reproduce the lake hydrodynamics and reveals well-developed seiches at different frequencies superimposed on wind-driven circulation patterns that appear to control the distribution of bottom sediments in this small upland lake.
Supporting calculations and assumptions for use in WESF safetyanalysis
Energy Technology Data Exchange (ETDEWEB)
Hey, B.E.
1997-03-07
This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.
Calibration under uncertainty for finite element models of masonry monuments
Energy Technology Data Exchange (ETDEWEB)
Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin
2010-02-01
Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges in defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from gaps in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.
Kozhevnikov, I. V.; Buzmakov, A. V.; Siewert, F.; Tiedtke, K.; Störmer, M.; Samoylova, L.; Sinn, H.
2017-05-01
A simple analytic equation is deduced to explain a new physical phenomenon detected experimentally: the growth of nano-dots (40-55 nm diameter, 8-13 nm height, 9.4 dots/μm2 surface density) on a grazing-incidence mirror surface after three years of irradiation by the free-electron laser FLASH (5-45 nm wavelength, 3 degree grazing incidence angle). The growth model is based on the assumption that nano-dot growth is caused by polymerization of incoming hydrocarbon molecules under the action of incident photons directly, or of photoelectrons knocked out from the mirror surface. The key feature of our approach is that we take into account, in explicit form, the variation of the radiation intensity near the mirror surface, because the polymerization probability is proportional to it. We demonstrate that this simple analytic approach explains all phenomena observed in experiment and predicts new effects. In particular, we show that nano-dot growth depends crucially on the grazing angle of the incoming beam and on its intensity: growth is observed only within intervals of grazing angle and radiation intensity that are bounded from above and below. A decrease in the grazing angle of only 1 degree (from 3 to 2 degrees) may result in strong suppression of nano-dot growth and its total disappearance. Similarly, a severalfold decrease in radiation intensity (replacement of the free-electron laser by a synchrotron) also causes nano-dot growth to disappear.
Global modelling of river water quality under climate change
van Vliet, Michelle T. H.; Franssen, Wietse H. P.; Yearsley, John R.
2017-04-01
Climate change will pose challenges on the quality of freshwater resources for human use and ecosystems for instance by changing the dilution capacity and by affecting the rate of chemical processes in rivers. Here we assess the impacts of climate change and induced streamflow changes on a selection of water quality parameters for river basins globally. We used the Variable Infiltration Capacity (VIC) model and a newly developed global water quality module for salinity, temperature, dissolved oxygen and biochemical oxygen demand. The modelling framework was validated using observed records of streamflow, water temperature, chloride, electrical conductivity, dissolved oxygen and biochemical oxygen demand for 1981-2010. VIC and the water quality module were then forced with an ensemble of bias-corrected General Circulation Model (GCM) output for the representative concentration pathways RCP2.6 and RCP8.5 to study water quality trends and identify critical regions (hotspots) of water quality deterioration for the 21st century.
Behavioural modelling of irrigation decision making under water scarcity
Foster, T.; Brozovic, N.; Butler, A. P.
2013-12-01
Providing effective policy solutions to aquifer depletion caused by abstraction for irrigation is a key challenge for socio-hydrology. However, most crop production functions used in hydrological models do not capture the intraseasonal nature of irrigation planning, or the importance of well yield in land and water use decisions. Here we develop a method for determining stochastic intraseasonal water use that is based on observed farmer behaviour but is also theoretically consistent with dynamically optimal decision making. We use the model (i) to analyse the joint land and water use decision by farmers; (ii) to assess changes in behaviour and production risk in response to water scarcity; and (iii) to understand the limits of applicability of current methods in policy design. We develop a biophysical model of water-limited crop yield building on the AquaCrop model. The model is calibrated and applied to case studies of irrigated corn production in Nebraska and Texas. We run the model iteratively, using long-term climate records, to define two formulations of the crop-water production function: (i) the aggregate relationship between total seasonal irrigation and yield (typical of current approaches); and (ii) the stochastic response of yield and total seasonal irrigation to the choice of an intraseasonal soil moisture target and irrigated area. Irrigated area (the extensive margin decision) and per-area irrigation intensity (the intensive margin decision) are then calculated for different seasonal water restrictions (corresponding to regulatory policies) and well yield constraints on intraseasonal abstraction rates (corresponding to aquifer system limits). Profit- and utility-maximising decisions are determined assuming risk neutrality and varying degrees of risk aversion, respectively. Our results demonstrate that the formulation of the production function has a significant impact on the response to water scarcity. For low well yields, which are the major concern
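The extensive/intensive margin trade-off described above can be sketched with a toy production function. All numbers below (crop price, pumping cost, concave yield response, seasonal water cap) are illustrative assumptions of this sketch, not calibrated AquaCrop output.

```python
import numpy as np

def expected_profit(area, depth, price=0.25, cost=0.05, water_cap=300.0):
    """Toy crop-water production function: concave yield response to
    per-area irrigation depth, subject to a seasonal water restriction.
    All parameter values are illustrative assumptions."""
    if area * depth > water_cap:          # seasonal allocation exceeded
        return -np.inf
    yield_per_area = 10.0 * (1.0 - np.exp(-depth / 150.0))
    pumping_cost = cost * depth / 100.0
    return area * (price * yield_per_area - pumping_cost)

def best_choice(areas, depths):
    """Grid search over the extensive (area) and intensive (depth) margins."""
    return max(((a, d) for a in areas for d in depths),
               key=lambda ad: expected_profit(*ad))

# With a binding 300-unit cap, spreading water thinner over more area
# can beat maximum per-area intensity on less area.
a, d = best_choice([1, 2, 3], [50, 100, 150])
```

The full model additionally makes the intraseasonal irrigation decision stochastic (via a soil-moisture target) and evaluates utility rather than profit under risk aversion; this sketch keeps only the two-margin structure.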
Operation Cottage: A Cautionary Tale of Assumption and Perceptual Bias
2015-01-01
but they can also set a lethal trap for unsuspecting mission planners, decisionmakers, and intelligence analysts. Assumptions are extremely... the planning process, but the planning staff must not become so wedded to their assumptions that they reject or overlook information that is not in... operations specialist who had served as principal planner for the Attu invasion. Major General Charles Corlett was to command the landing force, an
SIS and SIR Epidemic Models Under Virtual Dispersal.
Bichara, Derdei; Kang, Yun; Castillo-Chavez, Carlos; Horan, Richard; Perrings, Charles
2015-11-01
We develop a multi-group epidemic framework via virtual dispersal where the risk of infection is a function of residence time and local environmental risk. This novel approach eliminates the need to define and measure the contact rates used in traditional multi-group epidemic models with heterogeneous mixing. We apply this approach to a general n-patch SIS model whose basic reproduction number R0 is computed as a function of a patch residence-time matrix P. Our analysis implies that the resulting n-patch SIS model has robust dynamics when patches are strongly connected: there is a unique globally stable endemic equilibrium when R0 > 1, while the disease-free equilibrium is globally stable when R0 ≤ 1. Our further analysis indicates that the dispersal behavior described by the residence-time matrix P has profound effects on the disease dynamics at the single-patch level, with the consequence that proper dispersal behavior, along with the local environmental risk, can either promote or eliminate endemicity in particular patches. Our work highlights the impact of the residence-time matrix when the patches are not strongly connected. Our framework can be generalized to other endemic and disease outbreak models. As an illustration, we apply our framework to a two-patch SIR single-outbreak epidemic model where the process of disease invasion is connected to the final epidemic size relationship. We also explore the impact of disease-prevalence-driven decisions using a phenomenological modeling approach in order to contrast the role of a constant versus a state-dependent residence-time matrix on disease dynamics.
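As a hedged illustration of how a residence-time matrix can enter a next-generation reproduction-number calculation, the sketch below uses a generic two-group construction of our own (groups infect each other through shared patch residence); it is not the paper's exact formulation.

```python
import numpy as np

def r0_virtual_dispersal(beta, gamma, P):
    """Next-generation R0 for a hedged multi-group SIS sketch.
    beta[j]: environmental risk of patch j; gamma[i]: recovery rate of
    group i; P[i, j]: fraction of time group-i residents spend in patch j.
    This generic construction is for illustration only."""
    beta = np.asarray(beta, float)
    gamma = np.asarray(gamma, float)
    P = np.asarray(P, float)
    F = (P * beta) @ P.T                  # new-infection matrix via shared patches
    V = np.diag(gamma)                    # removal (recovery) matrix
    return np.max(np.abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

# With P = I (everyone stays home) this collapses to max_i beta_i / gamma_i.
r0 = r0_virtual_dispersal([0.3, 0.6], [0.2, 0.3], np.eye(2))
```

Varying the off-diagonal entries of P then shows how dispersal redistributes risk between patches, the central point of the abstract.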
Ejima, Keisuke; Aihara, Kazuyuki; Nishiura, Hiroshi
2013-01-01
The way we formulate a mathematical model of an infectious disease to capture symptomatic and asymptomatic transmission can greatly influence the likely effectiveness of vaccination when the vaccine acts by preventing clinical illness. The present study aims to assess the impact of model building strategy on the epidemic threshold under vaccination. We consider two different types of mathematical models, one based on observable variables including symptom onset and recovery from clinical illness (hereafter, the "observable model") and the other based on unobservable information about infection events and infectiousness (the "unobservable model"). By imposing a number of modifying assumptions on the observable model, we let it mimic the unobservable model, finding that the two models are fully consistent only when the incubation period is identical to the latent period and there is no pre-symptomatic transmission. We also computed the reproduction numbers with and without vaccination, demonstrating that the data generating process of vaccine-induced reduction in symptomatic illness is consistent with the observable model only, and examining how the effective reproduction number is calculated differently by the two models. To explicitly incorporate the vaccine effect of reducing the risk of symptomatic illness into a model, it is fruitful to employ a model that directly accounts for disease progression. More modeling studies based on observable epidemiological information are called for.
The monster sporadic group and a theory underlying superstring models
International Nuclear Information System (INIS)
Chapline, G.
1996-09-01
The pattern of duality symmetries acting on the states of compactified superstring models reinforces an earlier suggestion that the Monster sporadic group is a hidden symmetry for superstring models. This in turn points to a supersymmetric theory of self-dual and anti-self-dual K3 manifolds joined by Dirac strings and evolving in a 13 dimensional spacetime as the fundamental theory. In addition to the usual graviton and dilaton this theory contains matter-like degrees of freedom resembling the massless states of the heterotic string, thus providing a completely geometric interpretation for ordinary matter. 25 refs
Triatominae as a model of morphological plasticity under ecological pressure
Directory of Open Access Journals (Sweden)
Dujardin JP
1999-01-01
Full Text Available The use of biochemical and genetic characters to explore species or population relationships has been applied to taxonomic questions since the 60s. In responding to the central question of the evolutionary history of Triatominae, i.e. their monophyletic or polyphyletic origin, two important questions arise: (i) to what extent is the morphologically-based classification valid for assessing phylogenetic relationships? and (ii) what are the main mechanisms underlying speciation in Triatominae? Phenetic and genetic studies so far developed suggest that speciation in Triatominae may be a rapid process mainly driven by ecological factors.
Modified bond model for shear in slabs under concentrated loads
Lantsoght, E.O.L.; Van der Veen, C.; De Boer, A.
2015-01-01
Slabs subjected to concentrated loads close to supports, as occurring for truck loads on slab bridges, are less studied than beams in shear or slab-column connections in punching. To predict the shear capacity for this case, the Bond Model for concentric punching shear was studied initially.
THE FEATURES OF INNOVATIVE ACTIVITY UNDER THE OPEN INNOVATION MODEL
Directory of Open Access Journals (Sweden)
Julia P. Kulikova
2014-01-01
Full Text Available The article discusses the distinctive characteristics of open and closed models of the functioning of the innovation sphere. The use of an interaction-marketing approach to relationship management in the innovation sphere is justified. Two sets of marketing functions, network and process, are proposed for the effective functioning of innovation networks. A matrix scorecard of marketing functions in the innovation network is given.
The Optimal Portfolio Selection Model under g-Expectation
Directory of Open Access Journals (Sweden)
Li Li
2014-01-01
complicated and sophisticated, the optimal solution turns out to be surprisingly simple: the payoff of a portfolio of two binary claims. I also give the economic meaning of my model and a comparison with the model of Jin and Zhou (2008).
Characterizing QALYs under a General Rank Dependent Utility Model
H. Bleichrodt (Han); J. Quiggin (John)
1997-01-01
This paper provides a characterization of QALYs, the most important outcome measure in medical decision making, in the context of a general rank dependent utility model. We show that both for chronic and for nonchronic health states the characterization of QALYs depends on intuitive
Women's Educational Experience under Colonialism: Toward a Diachronic Model.
Barthel, Diane
1985-01-01
Introduces a three-stage historical model of female education in Africa during and since the colonial period. Suggests an historical tendency to educate only males, then an attempt to educate a limited number of females for "modern" roles. Contemporary situation presents educational opportunities for more women, but with subtle sexism…
International Nuclear Information System (INIS)
Alonso, A.; Buron, J.M.; Fernandez, S.
1991-07-01
In this report, a kinetic model has been developed with the aim of reproducing the chemical phenomena that take place in a flowing system containing steam, hydrogen, and iodine and caesium vapours. The work is divided into two parts. The first part consists in the estimation, through Activated Complex Theory, of the reaction rate constants for the chosen reactions, and the development of the kinetic model based on the concept of an ideal tubular chemical reactor. The second part deals with the application of the model to several cases, which were taken from the Phase B 'Scoping Calculations' of the Phebus-FP Project (sequence AB) and the SFD-ST and SFD1.1 experiments. The main conclusion obtained from this work is that the assumption of instantaneous equilibrium could be inaccurate for estimating the distribution of iodine and caesium species under severe accident conditions
Thermomechanics of damageable materials under diffusion: modelling and analysis
Roubíček, Tomáš; Tomassetti, Giuseppe
2015-12-01
We propose a thermodynamically consistent general-purpose model describing diffusion of a solute or a fluid in a solid undergoing possible phase transformations and damage, besides possible visco-inelastic processes. Heat generation/consumption/transfer is also considered. Damage is modelled as rate-independent. The applications include metal-hydrogen systems with metal/hydride phase transformation, poroelastic rocks, structural and ferro/para-magnetic phase transformations, water and heat transport in concrete, and, if diffusion is neglected, plasticity with damage and viscoelasticity, etc. For the ensuing system of partial differential equations and inclusions, we prove existence of solutions by a carefully devised semi-implicit approximation scheme of the fractional-step type.
Mathematical Modeling of Intravascular Blood Coagulation under Wall Shear Stress
Rukhlenko, Oleksii S.; Dudchenko, Olga A.; Zlobina, Ksenia E.; Guria, Georgy Th.
2015-01-01
Increased shear stress such as observed at local stenosis may cause drastic changes in the permeability of the vessel wall to procoagulants and thus initiate intravascular blood coagulation. In this paper we suggest a mathematical model to investigate how shear stress-induced permeability influences the thrombogenic potential of atherosclerotic plaques. Numerical analysis of the model reveals the existence of two hydrodynamic thresholds for activation of blood coagulation in the system and unveils typical scenarios of thrombus formation. The dependence of blood coagulation development on the intensity of blood flow, as well as on geometrical parameters of atherosclerotic plaque is described. Relevant parametric diagrams are drawn. The results suggest a previously unrecognized role of relatively small plaques (resulting in less than 50% of the lumen area reduction) in atherothrombosis and have important implications for the existing stenting guidelines. PMID:26222505
A Novel Computer Virus Propagation Model under Security Classification
Directory of Open Access Journals (Sweden)
Qingyi Zhu
2017-01-01
Full Text Available In reality, some computers have a specific security classification. For the sake of safety and cost, the security level of computers will be upgraded as threats in the network increase. Here we assume that there exists a threshold value which determines when countermeasures should be taken to raise the security of a fraction of computers with a low security level. In some specific realistic environments, the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model considering the impact of security classification in a fully interconnected network. Using the theory of dynamical stability, the existence of equilibria and the stability conditions are analysed and proved, and the optimal threshold value is given analytically. Some numerical experiments are then made to justify the model, and some discussions and antivirus measures are given.
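The threshold mechanism can be sketched as a minimal SIS-type simulation in which crossing the prevalence threshold permanently lowers the effective infection rate (a fraction of low-security hosts is upgraded). All parameter values below are illustrative assumptions, not taken from the paper.

```python
def simulate_prevalence(beta=0.6, beta_up=0.15, gamma=0.2,
                        theta=0.3, i0=0.01, dt=0.01, steps=50000):
    """Minimal sketch of threshold-triggered hardening in a fully
    interconnected SIS network: once prevalence i first exceeds theta,
    the infection rate drops permanently from beta to beta_up.
    Illustrative parameters only."""
    i, hardened = i0, False
    for _ in range(steps):
        b = beta_up if hardened else beta
        i += dt * (b * i * (1.0 - i) - gamma * i)   # forward-Euler SIS step
        if i > theta:
            hardened = True                          # countermeasures triggered
    return i, hardened

# Here beta_up < gamma, so once countermeasures trigger, the virus dies out.
i_final, hardened = simulate_prevalence()
```

Choosing `beta_up > gamma` instead would leave a (lower) endemic level, which is the regime where the paper's optimal threshold question becomes interesting.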
Environmental problems indicator under environmental modeling toward sustainable development
P. Sutthichaimethee; W. Tanoamchard; P. Sawangwong; P Pachana; N. Witit-Anun
2015-01-01
This research aims to apply a model to the study and analysis of the environmental and natural resource costs created in supply chains of goods and services produced in Thailand, and to propose indicators for the management of environmental problems caused by the production of goods and services, based on concepts of sustainable production and consumer behavior. The research showed that the highest environmental cost in terms of Natural Resource Materials was from pipelines and gas distribution, while the lowest ...
Integrated Modeling of Polymer Composites Under High Energy Laser Irradiation
2015-10-30
propagation constant. The top and bottom boundaries in Figure 3 are perfect electric conductors (PEC), which cause perfect reflection and simulate a semi... the FEA models were heated by passing a current through the fiber embedded in the dogbone. This is accomplished by placing a small amount of silver... paint directly into the silicone mold. The paint is dabbed onto the ends of the fiber before the resin is added. After curing, the spot of silver paint
Koopmeiners, Joseph S; Hobbs, Brian P
2018-05-01
Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
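A much-simplified sketch of the margin-adaptation idea above: pool the historical active-comparator effects (here by fixed-effect inverse-variance weighting rather than the paper's full Bayesian hierarchical model, and with no between-trial heterogeneity) and set the non-inferiority margin as a retained fraction of the smoothed effect. The 50% retention value is a common convention assumed here, not taken from the paper.

```python
import math

def pooled_effect(estimates, ses):
    """Fixed-effect (inverse-variance) pooling of historical
    active-comparator effect estimates -- a simplified stand-in for
    Bayesian hierarchical smoothing."""
    ws = [1.0 / s ** 2 for s in ses]
    mean = sum(w * e for w, e in zip(ws, estimates)) / sum(ws)
    return mean, math.sqrt(1.0 / sum(ws))

def adapted_margin(hist_effects, hist_ses, retention=0.5):
    """Non-inferiority margin as the retained fraction of the smoothed
    comparator effect (retention = 0.5 is an assumed convention)."""
    mean, _ = pooled_effect(hist_effects, hist_ses)
    return retention * mean

# Three hypothetical historical trials with equal standard errors.
margin = adapted_margin([0.40, 0.50, 0.45], [0.10, 0.10, 0.10])
```

In the paper's approach, the current-trial data also enter the smoothing, so a comparator effect that drifts from its historical value (a constancy violation) automatically shrinks the margin.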
Maintenance cost models in deregulated power systems under opportunity costs
International Nuclear Information System (INIS)
Al-Arfaj, K.; Dahal, K.; Azaiez, M.N.
2007-01-01
In a centralized power system, the operator is responsible for scheduling maintenance. There are different types of maintenance, including corrective maintenance, predictive maintenance, preventive maintenance, and reliability-centred maintenance. The main cause of power failures is poor maintenance. As such, maintenance costs play a significant role in deregulated power systems. They include direct costs associated with material and labor as well as indirect costs associated with spare parts inventory, shipment, test equipment, indirect labor, opportunity costs, and the cost of failure. In maintenance scheduling and planning, the cost function is the only component of the objective function. This paper presented the results of a study in which different components of maintenance costs were modeled. The maintenance models were formulated as an optimization problem with single and multiple objectives and a set of constraints. The maintenance cost models could be used to schedule the maintenance activities of power generators more accurately and to identify the best maintenance strategies over a period of time, as they consider failure and opportunity costs in a deregulated environment. 32 refs., 4 tabs., 4 figs
Modeling the Virtual Machine Launching Overhead under Fermicloud
Energy Technology Data Exchange (ETDEWEB)
Garzoglio, Gabriele [Fermilab; Wu, Hao [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Bernabeu, Gerard [Fermilab; Noh, Seo-Young [KISTI, Daejeon
2014-11-12
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines to available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
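A reference model of this kind might be sketched as a regression of launch overhead on resource utilization at launch time. The linear form and the synthetic data below are assumptions of this sketch, not FermiCloud's actual model or data.

```python
import numpy as np

def fit_overhead_model(cpu, mem, io, overhead):
    """Least-squares sketch of a launch-overhead reference model:
    overhead as a linear function of CPU, memory, and I/O utilization
    at launch time. The linear form is our assumption."""
    X = np.column_stack([np.ones_like(cpu), cpu, mem, io])
    coef, *_ = np.linalg.lstsq(X, overhead, rcond=None)
    return coef

# Synthetic illustration: a 30 s base cost plus utilization-dependent terms.
rng = np.random.default_rng(0)
cpu, mem, io = rng.random(200), rng.random(200), rng.random(200)
overhead = 30.0 + 40.0 * cpu + 20.0 * mem          # I/O has no effect here
coef = fit_overhead_model(cpu, mem, io, overhead)
```

The fitted coefficients can then be evaluated against current utilization to predict the overhead of launching a VM on a candidate host before committing to it.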
Crema, Enrico R.; Kandler, Anne; Shennan, Stephen
2016-12-01
A long tradition of cultural evolutionary studies has developed a rich repertoire of mathematical models of social learning. Early studies have laid the foundation of more recent endeavours to infer patterns of cultural transmission from observed frequencies of a variety of cultural data, from decorative motifs on potsherds to baby names and musical preferences. While this wide range of applications provides an opportunity for the development of generalisable analytical workflows, archaeological data present new questions and challenges that require further methodological and theoretical discussion. Here we examine the decorative motifs of Neolithic pottery from an archaeological assemblage in Western Germany, and argue that the widely used (and relatively undiscussed) assumption that observed frequencies are the result of a system in equilibrium conditions is unwarranted, and can lead to incorrect conclusions. We analyse our data with a simulation-based inferential framework that can overcome some of the intrinsic limitations in archaeological data, as well as handle both equilibrium conditions and instances where the mode of cultural transmission is time-variant. Results suggest that none of the models examined can produce the observed pattern under equilibrium conditions, and suggest instead temporal shifts in the patterns of cultural transmission.
Stability of void lattices under irradiation: a kinetic model
International Nuclear Information System (INIS)
Benoist, P.; Martin, G.
1975-01-01
Voids are embedded in a homogeneous medium where point defects are uniformly created and annihilated. As shown by a perturbation calculation, the proportion of the defects which are lost on the cavities goes through a maximum when the voids are arranged on a translation lattice. If a void is displaced from its lattice site, its growth rate becomes anisotropic and is larger in the direction of the vacant site. The relative efficiency of BCC versus FCC void lattices for the capture of point defects is shown to depend on the relaxation length of the point defects in the surrounding medium. It is shown that the rate of energy dissipation in the crystal under irradiation is maximum when the voids are ordered on the appropriate lattice
Microcosm Experiments and Modeling of Microbial Movement Under Unsaturated Conditions
Energy Technology Data Exchange (ETDEWEB)
Brockman, F.J.; Kapadia, N.; Williams, G.; Rockhold, M.
2006-04-05
Colonization of bacteria in porous media has been studied primarily in saturated systems. In this study we examine how microbial colonization in unsaturated porous media is controlled by water content and particle size. This is important for understanding the feasibility and success of bioremediation via nutrient delivery when contaminant degraders are at low densities and when total microbial populations are sparse and spatially discontinuous. The study design used 4 different sand sizes, each at 4 different water contents; experiments were run with and without acetate as the sole carbon source. All experiments were run in duplicate columns and used the motile organism Pseudomonas stutzeri strain KC, a carbon tetrachloride degrader. At a given sand size, bacteria traveled further with increasing volumetric water content. At a given volumetric water content, bacteria generally traveled further with increasing sand size. Water redistribution, solute transport, gas diffusion, and bacterial colonization dynamics were simulated using a numerical finite-difference model. Solute and bacterial transport were modeled using advection-dispersion equations, with reaction-rate source/sink terms to account for bacterial growth and substrate utilization, represented using dual Monod-type kinetics. Oxygen transport and diffusion were modeled accounting for equilibrium partitioning between the aqueous and gas phases. The movement of bacteria in the aqueous phase was modeled using a linear impedance model in which the coefficient D{sub m} represents random motility, as used by Barton and Ford (1995). The unsaturated random motility coefficients we obtained (1.4 x 10{sup -6} to 2.8 x 10{sup -5} cm{sup 2}/sec) are in the same range as those found by others for saturated systems (3.5 x 10{sup -6} to 3.5 x 10{sup -5} cm{sup 2}/sec). The results show that some bacteria can rapidly migrate in well sorted unsaturated sands (and perhaps in relatively high porosity, poorly
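The random-motility term alone can be sketched as a 1-D explicit finite-difference update; this is a simplified illustration (advection, growth and Monod kinetics are omitted), with D{sub m} taken from the quoted range and all other values assumed for the example, not taken from the authors' model:

```python
import numpy as np

def bacterial_motility_1d(D_m=1.4e-6, length=10.0, n_cells=101, t_end=3600.0):
    """Spread of bacterial density along a column driven only by random
    motility, i.e. a diffusion-like term with coefficient D_m (cm^2/s).

    Explicit finite differences on a uniform grid; units are cm and s.
    """
    dx = length / (n_cells - 1)
    dt = 0.4 * dx**2 / D_m            # respect the explicit stability limit
    b = np.zeros(n_cells)
    b[0] = 1.0                        # normalized inoculum at the column inlet
    t = 0.0
    while t < t_end:
        lap = np.zeros_like(b)
        lap[1:-1] = (b[2:] - 2.0 * b[1:-1] + b[:-2]) / dx**2
        b = b + dt * D_m * lap        # diffusive (random-motility) update
        t += dt
    return b
```

With D_m of order 10{sup -6} cm{sup 2}/sec the diffusion length over an hour is only about a millimetre, which is why motility alone cannot explain rapid migration and the full model couples it to growth and transport.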
Bistable dynamics underlying excitability of ion homeostasis in neuron models.
Directory of Open Access Journals (Sweden)
Niklas Hübel
2014-05-01
Full Text Available When neurons fire action potentials, dissipation of free energy is usually not directly considered, because the change in free energy is often negligible compared to the immense reservoir stored in neural transmembrane ion gradients, and the long-term energy requirements are met through chemical energy, i.e., metabolism. However, these gradients can temporarily nearly vanish in neurological diseases, such as migraine and stroke, and in traumatic brain injury from concussions to severe injuries. We study biophysical neuron models based on the Hodgkin-Huxley (HH) formalism extended to include time-dependent ion concentrations inside and outside the cell and metabolic energy-driven pumps. We reveal the basic mechanism of a state of free-energy starvation (FES) with bifurcation analyses showing that ion dynamics is bistable over a large range of pump rates in the absence of contact with an ion bath. This is interpreted as a threshold reduction of a new fundamental mechanism of ionic excitability that causes a long-lasting but transient FES, as observed in pathological states. We conclude, in particular, that coupling of extracellular ion concentrations to a large glial-vascular bath can act as an inhibitory mechanism crucial to ion homeostasis, while the Na⁺/K⁺ pumps alone are insufficient to recover from FES. Our results provide the missing link between the HH formalism and activator-inhibitor models that have been successfully used for modeling migraine phenotypes, and therefore will allow us to validate the hypothesis that migraine symptoms are explained by disturbed function in ion channel subunits, Na⁺/K⁺ pumps, and other proteins that regulate ion homeostasis.
Price level versus inflation targeting under model uncertainty
Cateau, Gino
2008-01-01
The purpose of this paper is to make a quantitative contribution to the inflation versus price-level targeting debate. It considers a policy-maker who can set policy through either an inflation-targeting rule or a price-level-targeting rule to minimize a quadratic loss function, using the actual projection model of the Bank of Canada (ToTEM). The paper finds that price-level targeting dominates inflation targeting, although it can lead to much more volatile inflation depending on the weight a...
Financial Transaction Tax: Determination of Economic Impact Under DSGE Model
Directory of Open Access Journals (Sweden)
Veronika Solilová
2015-01-01
Full Text Available The discussion about possible taxation of the financial sector started in the European Union as a result of the financial crisis, which spread to Europe from the United States in 2008, and of the massive financial interventions subsequently made by governments in favour of the financial sector. On 14 February 2013, after rejection of the 2011 draft directive introducing a common system of financial transaction tax, the European Commission introduced the financial transaction tax through enhanced cooperation. The aim of the paper is to research the economic impact of the financial transaction tax on the EU (EU27 or EU11) using the DSGE model that was employed for the determination of impacts. Based on our analysis, the DSGE model can be considered underestimated with respect to the impact on economic growth and overestimated with respect to revenue collection. In particular, the overall impact of the financial transaction tax, considering cascade effects on securities (tax rate 2.2%) and derivatives (tax rate 0.2%), ranges between −4.752 and 1.472 percentage points of GDP. Further, it is assumed that relocation effects of business/trade, averaging 40%, would cause a decline in expected tax revenues of some EUR 13bn. Thus, at a time of fragile economic growth across the EU and increased risk of recession in Europe, the introduction of the FTT would be undesirable.
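The cascade and relocation arithmetic can be sketched in a few lines; the 11-link chain and per-side statutory rate below are one plausible reading of the quoted 2.2% cumulative rate, assumed for illustration and not taken from the paper:

```python
def cascade_rate(statutory_rate, n_links, both_sides=True):
    """Cumulative ('cascade') tax burden when an instrument passes through
    a chain of n_links taxable transactions, each taxed on one or both
    counterparties. Purely illustrative arithmetic."""
    per_link = statutory_rate * (2 if both_sides else 1)
    return per_link * n_links

def expected_revenue(static_revenue, relocation_share=0.40):
    """Reduce a static revenue estimate by the assumed share of trading
    that relocates out of the taxed jurisdiction."""
    return static_revenue * (1.0 - relocation_share)

# e.g. a 0.1% per-side tax cascading over 11 transaction links
# accumulates to a 2.2% effective rate (hypothetical chain length)
print(cascade_rate(0.001, 11))
```

The same `expected_revenue` step is how a 40% average relocation share scales a static revenue projection downward.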
A thermal model for photovoltaic panels under varying atmospheric conditions
International Nuclear Information System (INIS)
Armstrong, S.; Hurley, W.G.
2010-01-01
The response of the photovoltaic (PV) panel temperature is dynamic with respect to the changes in the incoming solar radiation. During periods of rapidly changing conditions, a steady state model of the operating temperature cannot be justified because the response time of the PV panel temperature becomes significant due to its large thermal mass. Therefore, it is of interest to determine the thermal response time of the PV panel. Previous attempts to determine the thermal response time have used indoor measurements, controlling the wind flow over the surface of the panel with fans or conducting the experiments in darkness to avoid radiative heat loss effects. In real operating conditions, the effective PV panel temperature is subjected to randomly varying ambient temperature and fluctuating wind speeds and directions; parameters that are not replicated in controlled, indoor experiments. A new thermal model is proposed that incorporates atmospheric conditions and the effects of PV panel material composition and mounting structure. Experimental results are presented which verify the thermal behaviour of a photovoltaic panel for low to strong winds.
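The transient behaviour described above can be captured by a minimal lumped-capacitance sketch; the thermal mass, heat-transfer coefficient and absorptivity below are illustrative placeholders, not the paper's fitted values:

```python
import math

def panel_temperature(t, T_amb=293.0, G=800.0, h=15.0, area=1.6,
                      m_c=22000.0, alpha=0.9, T0=293.0):
    """Lumped-capacitance PV panel temperature under a step in irradiance:

        m_c * dT/dt = alpha * G * A - h * A * (T - T_amb)

    Closed-form solution at time t (s). m_c is the panel's thermal mass
    (J/K), h a wind-dependent heat-loss coefficient (W/m^2 K), G the
    irradiance (W/m^2); all values here are assumptions for illustration.
    """
    tau = m_c / (h * area)                 # thermal response time, s
    T_ss = T_amb + alpha * G / h           # steady-state temperature, K
    return T_ss + (T0 - T_ss) * math.exp(-t / tau)
```

With these numbers the response time `tau` is on the order of fifteen minutes, which is why a steady-state model fails under rapidly fluctuating irradiance and wind.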
Modelling of nectarine drying under near infrared - Vacuum conditions.
Alaei, Behnam; Chayjan, Reza Amiri
2015-01-01
Drying of nectarine slices was performed to determine their thermal and physical properties, in order to reduce product deterioration due to chemical reactions, facilitate storage and lower transportation costs. Because nectarine slices are heat-sensitive and have a long drying period, the selection of a suitable drying approach is a challenging task. Infrared-vacuum drying can be used as an appropriate method for susceptible materials with high moisture content such as nectarine slices. Modelling of nectarine slice drying was carried out in a thin layer under near infrared-vacuum conditions. Drying of the samples was implemented at absolute pressures of 20, 40 and 60 kPa and drying temperatures of 50, 60 and 70°C. The drying behaviour of nectarine slices, as well as the effect of drying conditions on moisture loss trend, drying rate, effective diffusion coefficient, activation energy, shrinkage, colour and energy consumption of nectarine slices dried in the near infrared-vacuum dryer, are discussed in this study. Six mathematical models were used to predict the moisture ratio of the samples in thin-layer drying. The Midilli model was best at predicting the drying behaviour of nectarine slices. The maximum drying rates of the samples were between 0.014 and 0.047 g water/g dry material·min. Effective moisture diffusivity of the samples was estimated in the range 2.46·10⁻¹⁰ to 6.48·10⁻¹⁰ m²/s. Activation energy was computed between 31.28 and 35.23 kJ/mol. Minimum shrinkage (48.4%) and total colour difference (15.1) were achieved at a temperature of 50°C and an absolute pressure of 20 kPa. Energy consumption of the tests was estimated in the range 0.129 to 0.247 kWh. Effective moisture diffusivity increased with decreasing absolute pressure and increasing drying temperature, but the effect of drying temperature on the effective moisture diffusivity of nectarine slices was greater than that of vacuum pressure. Activation energy decreased with decreasing absolute pressure. Total colour
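The Midilli thin-layer model and the Arrhenius form behind the reported diffusivities and activation energies can be sketched as follows; the coefficient values used in the assertions are placeholders, not the fitted estimates from the study:

```python
import math

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model for the moisture ratio:
        MR(t) = a * exp(-k * t**n) + b * t
    Coefficients a, k, n, b are fitted per temperature/pressure condition.
    """
    return a * math.exp(-k * t**n) + b * t

def arrhenius_diffusivity(D0, Ea, T):
    """Effective moisture diffusivity via the Arrhenius relation:
        D_eff = D0 * exp(-Ea / (R * T))
    with Ea in J/mol and T in K; the standard route to activation
    energies in the 31-35 kJ/mol range reported above.
    """
    R = 8.314  # universal gas constant, J/(mol K)
    return D0 * math.exp(-Ea / (R * T))
```

A higher drying temperature raises `D_eff` through the exponential term, consistent with the reported dominance of temperature over vacuum pressure.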
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
Energy Technology Data Exchange (ETDEWEB)
Collin, Blaise Paul [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps require the benchmark participants to undertake a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read
Developing Physiologic Models for Emergency Medical Procedures Under Microgravity
Parker, Nigel; O'Quinn, Veronica
2012-01-01
Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, together with a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the level of consciousness of the patient. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions from the femoral, carotid, brachial, and pedal pulse locations.
Modeling of fracture of protective concrete structures under impact loads
Radchenko, P. A.; Batuev, S. P.; Radchenko, A. V.; Plevkov, V. S.
2015-10-01
This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure consisting of layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using an original algorithm and software, including algorithms for building grids of complex geometric objects and for parallel computation. Dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that takes into account the anisotropy of the elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure; cells begin to fail in the unloading wave that originates after the compression wave arrives at free cell surfaces.