WorldWideScience

Sample records for modeling approaches assumptions

  1. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and are likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus, ultimately, on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  2. The stable model semantics under the any-world assumption

    OpenAIRE

    Straccia, Umberto; Loyer, Yann

    2004-01-01

    The stable model semantics has become a dominant approach to completing the knowledge provided by a logic program by means of the Closed World Assumption (CWA). The CWA asserts that any atom whose truth-value cannot be inferred from the facts and rules is supposed to be false. This assumption is orthogonal to the so-called Open World Assumption (OWA), which asserts that every such atom's truth is supposed to be unknown. The topic of this paper is to be more fine-grained. Indeed, the objec...

  3. Assumption-versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  4. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key Points: An OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted. Bayesian models display high sensitivity to error assumptions and structural choices. Source apportionment results differ between Bayesian and frequentist approaches.
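
    To make the source-apportionment idea concrete, the following minimal sketch (not the authors' Bayesian models, and with made-up tracer values) estimates non-negative source proportions that sum to one by constrained least squares, i.e. the simplest frequentist analogue of the mixing models compared in the study.

        import numpy as np
        from scipy.optimize import minimize

        # rows = geochemical tracers, columns = sources (arable topsoil, road verge, subsurface);
        # tracer concentrations and the sediment signature below are invented for illustration
        A = np.array([[12.0,  8.0, 15.0],
                      [ 3.5,  6.0,  2.0],
                      [40.0, 55.0, 30.0]])
        b = np.array([12.4, 3.4, 39.0])        # tracer signature of a suspended-sediment sample

        def loss(p):
            # squared mismatch between the mixed source signature and the sample
            return np.sum((A @ p - b) ** 2)

        cons = ({'type': 'eq', 'fun': lambda p: np.sum(p) - 1.0},)   # proportions sum to one
        bounds = [(0.0, 1.0)] * A.shape[1]                           # proportions are non-negative
        p0 = np.full(A.shape[1], 1.0 / A.shape[1])

        res = minimize(loss, p0, bounds=bounds, constraints=cons, method='SLSQP')
        print("estimated source proportions:", np.round(res.x, 3))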

  5. Contemporary assumptions on human nature and work and approach to human potential managing

    Directory of Open Access Journals (Sweden)

    Vujić Dobrila

    2006-01-01

    Full Text Available The general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and the preference for a general organizational model, as well as for particular mechanisms of human resource management. The research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and in a group of small business enterprises, differing in terms of ownership structure and type of activity. The general hypothesis, that assumptions on human nature and work are statistically significantly connected to the preferred approach (model) of work motivation and commitment, has been confirmed. The specific hypotheses have also been confirmed: · Assumptions on the human as a rational economic being are statistically significantly correlated with only two mechanisms of the traditional model, the work-method control mechanism and the working-discipline mechanism. · Assumptions on the human as a social being are statistically significantly correlated with all mechanisms of engaging employees that belong to the human relations model, except the mechanism of introducing an adequate type of reward for all employees independently of working results. · Assumptions on the human as a creative being are statistically significantly and positively correlated with the preference for two mechanisms belonging to the human resource model, investing in education and training and creating conditions for the application of knowledge and skills. Young subjects holding assumptions on the human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resource model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and the preferred models of engaging employees appears especially in the sub-sample of managers and in the category of young subjects

  6. Incorporating assumption deviation risk in quantitative risk assessments: A semi-quantitative approach

    International Nuclear Information System (INIS)

    Khorsandi, Jahon; Aven, Terje

    2017-01-01

    Quantitative risk assessments (QRAs) of complex engineering systems are based on numerous assumptions and expert judgments, as there is limited information available for supporting the analysis. In addition to sensitivity analyses, the concept of assumption deviation risk has been suggested as a means for explicitly considering the risk related to inaccuracies and deviations in the assumptions, which can significantly impact the results of the QRAs. However, challenges remain for its practical implementation, given the number of assumptions and the magnitude of deviations to be considered. This paper presents an approach for integrating an assumption deviation risk analysis as part of QRAs. The approach begins with identifying the safety objectives that the QRA aims to support, and then identifies critical assumptions with respect to ensuring the objectives are met. Key issues addressed include the deviations required to violate the safety objectives, the uncertainties related to the occurrence of such events, and the strength of knowledge supporting the assessments. Three levels of assumptions are considered, which include assumptions related to the system's structural and operational characteristics, the effectiveness of the established barriers, as well as the consequence analysis process. The approach is illustrated for the case of an offshore installation. - Highlights: • An approach for assessing the risk of deviations in QRA assumptions is presented. • Critical deviations and uncertainties related to their occurrence are addressed. • The analysis promotes critical thinking about the foundation and results of QRAs. • The approach is illustrated for the case of an offshore installation.

  7. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    Full Text Available The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for) to see whether they are important, in general or with respect to specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  8. A simulation study to compare three self-controlled case series approaches: correction for violation of assumption and evaluation of bias.

    Science.gov (United States)

    Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J

    2013-08-01

    The assumption that the occurrence of the outcome event must not alter subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of the outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence, which could be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, and thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.
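
    The following is a hedged illustration (not the authors' simulation code) of the bias the abstract describes: a single exposure with a fixed risk window is cancelled whenever an event has already occurred (complete contraindication), and the standard single-exposure SCCS likelihood is then maximised; all rates, window lengths and sample sizes are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        T, RISK_LEN, TRUE_RR, BASE_RATE = 365.0, 42.0, 3.0, 0.002   # days, days, relative incidence, events/day

        def simulate(n, contraindicated):
            """Return (events in risk window, risk length, events in control time, control length) per case."""
            rows = []
            for _ in range(n):
                expo = rng.uniform(0, T - RISK_LEN)                 # scheduled exposure day
                t, events = 0.0, []
                while True:                                         # thinned Poisson process of events on [0, T]
                    t += rng.exponential(1.0 / (BASE_RATE * TRUE_RR))
                    if t > T:
                        break
                    in_risk = expo <= t < expo + RISK_LEN
                    rate = BASE_RATE * (TRUE_RR if in_risk else 1.0)
                    if rng.random() < rate / (BASE_RATE * TRUE_RR):
                        events.append(t)
                if events and contraindicated and events[0] < expo:
                    continue        # exposure cancelled by an earlier event: case is unexposed, uninformative
                if events:
                    n_risk = sum(expo <= e < expo + RISK_LEN for e in events)
                    rows.append((n_risk, RISK_LEN, len(events) - n_risk, T - RISK_LEN))
            return rows

        def sccs_mle(rows):
            """Maximise the standard SCCS conditional likelihood for a single risk window."""
            def negloglik(log_rr):
                rr, ll = np.exp(log_rr), 0.0
                for n_r, e_r, n_c, e_c in rows:
                    ll += n_r * np.log(rr * e_r) + n_c * np.log(e_c) - (n_r + n_c) * np.log(rr * e_r + e_c)
                return -ll
            return np.exp(minimize_scalar(negloglik, bounds=(-3, 3), method='bounded').x)

        for contraindicated in (False, True):
            est = sccs_mle(simulate(20000, contraindicated))
            print(f"contraindication={contraindicated}: estimated relative incidence = {est:.2f} (true {TRUE_RR})")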

  9. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification, particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have an appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Occupancy estimation and the closure assumption

    Science.gov (United States)

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing
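
    For readers unfamiliar with the method, the sketch below (illustrative only, not the authors' code) fits the standard single-season occupancy likelihood, which assumes closure between repeat surveys, to simulated detection histories, once with closure holding and once with sites allowed to switch state between surveys; all parameter values are arbitrary.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(7)
        S, J, PSI, P = 500, 4, 0.6, 0.4        # sites, surveys per site, true occupancy, true detection

        def simulate(closed=True, switch=0.3):
            y = np.zeros((S, J), dtype=int)
            z = rng.random(S) < PSI            # latent occupancy state at the first survey
            for j in range(J):
                if not closed and j > 0:       # closure violation: sites may change state between surveys
                    flip = rng.random(S) < switch
                    z = np.where(flip, ~z, z)
                y[:, j] = (rng.random(S) < P) & z
            return y

        def negloglik(theta, y):
            psi, p = expit(theta)              # parameters estimated on the logit scale
            d = y.sum(axis=1)
            ll_detected = np.log(psi) + d * np.log(p) + (J - d) * np.log1p(-p)
            ll_all_zero = np.log(psi * (1 - p) ** J + (1 - psi))
            return -np.sum(np.where(d > 0, ll_detected, ll_all_zero))

        for closed in (True, False):
            y = simulate(closed)
            fit = minimize(negloglik, x0=np.zeros(2), args=(y,), method='Nelder-Mead')
            psi_hat, p_hat = expit(fit.x)
            print(f"closure={closed}: psi_hat={psi_hat:.2f} (true {PSI}), p_hat={p_hat:.2f} (true {P})")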

  11. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    Science.gov (United States)

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
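
    A hedged point-treatment sketch of the underlying idea (not the longitudinal MSM or the actual super learner of the paper): inverse probability weighting with a propensity score from a misspecified parametric model versus from a flexible machine-learning model, using scikit-learn; the data-generating process and effect size are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        n = 20000
        x = rng.normal(size=n)                                   # confounder (think baseline HbA1c)
        logit = 1.2 * np.sin(2 * x) + 0.4 * x**2 - 0.6           # treatment depends on x non-linearly
        a = rng.binomial(1, 1 / (1 + np.exp(-logit)))            # treatment indicator
        y = 1.0 * a + 2.0 * np.sin(2 * x) + 0.5 * x**2 + rng.normal(size=n)   # true effect = 1.0
        X = x.reshape(-1, 1)

        def ipw_ate(prop):
            # Hajek inverse-probability-weighted contrast of treated and untreated means
            w = a / prop + (1 - a) / (1 - prop)
            return (np.sum(w * a * y) / np.sum(w * a)
                    - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))

        ps_param = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]         # linear in x: misspecified
        ps_flex = GradientBoostingClassifier().fit(X, a).predict_proba(X)[:, 1]  # flexible learner

        print("IPW effect, misspecified parametric propensity:", round(ipw_ate(ps_param), 2))
        print("IPW effect, flexible propensity model:         ", round(ipw_ate(ps_flex), 2))
        print("true effect: 1.0")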

  12. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

    Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600

  13. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Fro...... that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...

  14. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results...... from different LCA studies can be consistent. This paper is an attempt to identify, review and analyse methodologies and technical assumptions used in various parts of selected waste LCA models. Several criteria were identified, which could have significant impacts on the results......, such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing...

  15. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take the environment and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. In order to make the algorithm recursive, the propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.

  16. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom failed to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.
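
    A toy numerical illustration of the point (not taken from the paper): an off-centre charge concentration with zero net charge produces a potential that an atom-centred monopole (the net charge) cannot reproduce at all, while retaining the dipole term recovers it closely; charges, separation and units are arbitrary.

        import numpy as np

        # units chosen so that 1/(4*pi*eps0) = 1; charge and separation are made up
        q, d = 0.4, 0.2
        plus, minus = np.array([0.0, 0.0, +d / 2]), np.array([0.0, 0.0, -d / 2])
        p = q * d                                  # dipole moment, oriented along z

        def v_exact(r):
            # potential of the actual off-centre charge pair
            return q / np.linalg.norm(r - plus) - q / np.linalg.norm(r - minus)

        def v_monopole(r):
            # a single atom-centred point charge carries only the net charge, which is zero here
            return 0.0

        def v_dipole(r):
            # ideal point-dipole term p*cos(theta)/r^2 added to the (zero) monopole term
            rn = np.linalg.norm(r)
            return p * r[2] / rn**3

        for r in (np.array([0.0, 0.0, 2.0]), np.array([2.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])):
            print(r, f"exact={v_exact(r):+.4f}  monopole={v_monopole(r):+.4f}  dipole={v_dipole(r):+.4f}")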

  17. Testing the basic assumption of the hydrogeomorphic approach to assessing wetland functions.

    Science.gov (United States)

    Hruby, T

    2001-05-01

    The hydrogeomorphic (HGM) approach for developing "rapid" wetland function assessment methods stipulates that the variables used are to be scaled based on data collected at sites judged to be the best at performing the wetland functions (reference standard sites). A critical step in the process is to choose the least altered wetlands in a hydrogeomorphic subclass to use as a reference standard against which other wetlands are compared. The basic assumption made in this approach is that wetlands judged to have had the least human impact have the highest level of sustainable performance for all functions. The levels at which functions are performed in these least altered wetlands are assumed to be "characteristic" for the subclass and "sustainable." Results from data collected in wetlands in the lowlands of western Washington suggest that the assumption may not be appropriate for this region. Teams developing methods for assessing wetland functions did not find that the least altered wetlands in a subclass had a range of performance levels that could be identified as "characteristic" or "sustainable." Forty-four wetlands in four hydrogeomorphic subclasses (two depressional subclasses and two riverine subclasses) were rated by teams of experts on the severity of their human alterations and on the level of performance of 15 wetland functions. An ordinal scale of 1-5 was used to quantify alterations in water regime, soils, vegetation, buffers, and contributing basin. Performance of functions was judged on an ordinal scale of 1-7. Relatively unaltered wetlands were judged to perform individual functions at levels that spanned all of the seven possible ratings in all four subclasses. The basic assumption of the HGM approach, that the least altered wetlands represent "characteristic" and "sustainable" levels of functioning that are different from those found in altered wetlands, was not confirmed. Although the intent of the HGM approach is to use level of functioning as a

  18. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  19. Do unreal assumptions pervert behaviour?

    DEFF Research Database (Denmark)

    Petersen, Verner C.

    of the basic assumptions underlying the theories found in economics. Assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert......-interested way nothing will. The purpose of this paper is to take a critical look at some of the assumptions and theories found in economics and discuss their implications for the models and the practices found in the management of business. The expectation is that the unrealistic assumptions of economics have...... become taken for granted and tacitly included into theories and models of management. Guiding business and management to behave in a fashion that apparently makes these assumptions become "true". Thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some...

  20. Vocational Didactics: Core Assumptions and Approaches from Denmark, Germany, Norway, Spain and Sweden

    Science.gov (United States)

    Gessler, Michael; Moreno Herrera, Lázaro

    2015-01-01

    The design of vocational didactics has to meet special requirements. Six core assumptions are identified: outcome orientation, cultural-historical embedding, horizontal structure, vertical structure, temporal structure, and the changing nature of work. Different approaches and discussions from school-based systems (Spain and Sweden) and dual…

  1. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  2. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption...

  3. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling procedure, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.

  4. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid

  5. Effect of grid resolution and subgrid assumptions on the model prediction of a reactive buoyant plume under convective conditions

    International Nuclear Information System (INIS)

    Chock, D.P.; Winkler, S.L.; Pu Sun

    2002-01-01

    We have introduced a new and elaborate approach to understand the impact of grid resolution and subgrid chemistry assumptions on the grid-model prediction of species concentrations for a system with highly non-homogeneous chemistry - a reactive buoyant plume immediately downwind of the stack in a convective boundary layer. The Parcel-Grid approach was used to describe both the air parcel turbulent transport and the chemistry. This approach allows an identical transport process for all simulations. It also allows a description of subgrid chemistry. The ambient and plume parcel transport follows the description of Luhar and Britter (Atmos. Environ., 23 (1989) 1911, 26A (1992) 1283). The chemistry follows that of the Carbon-Bond mechanism. Three different grid sizes were considered: fine, medium and coarse, together with three different subgrid chemistry assumptions: micro-scale or individual parcel, tagged-parcel (plume and ambient parcels treated separately), and untagged-parcel (plume and ambient parcels treated indiscriminately). Reducing the subgrid information is not necessarily similar to increasing the model grid size. In our example, increasing the grid size leads to a reduction in the suppression of ozone in the presence of a high-NOx stack plume, and a reduction in the effectiveness of the NOx-inhibition effect. On the other hand, reducing the subgrid information (by using the untagged-parcel assumption) leads to an increase in ozone reduction and an enhancement of the NOx-inhibition effect insofar as the ozone extremum is concerned. (author)

  6. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    Science.gov (United States)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows, are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy’s star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields. OMEGA

  7. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    Science.gov (United States)

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  8. Monitoring Assumptions in Assume-Guarantee Contracts

    Directory of Open Access Journals (Sweden)

    Oleg Sokolsky

    2016-05-01

    Full Text Available Pre-deployment verification of software components with respect to behavioral specifications in the assume-guarantee form does not, in general, guarantee absence of errors at run time. This is because assumptions about the environment cannot be discharged until the environment is fixed. An intuitive approach is to complement pre-deployment verification of guarantees, up to the assumptions, with post-deployment monitoring of environment behavior to check that the assumptions are satisfied at run time. Such a monitor is typically implemented by instrumenting the application code of the component. An additional challenge for the monitoring step is that environment behaviors are typically obtained through an I/O library, which may alter the component's view of the input format. This transformation requires us to introduce a second pre-deployment verification step to ensure that alarms raised by the monitor would indeed correspond to violations of the environment assumptions. In this paper, we describe an approach for constructing monitors and verifying them against the component assumption. We also discuss limitations of instrumentation-based monitoring and potential ways to overcome it.
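
    The sketch below is a loose illustration of the idea (not the paper's formal framework or instrumentation): a component whose guarantee is verified only under an input assumption wraps its input stream in a run-time monitor that raises an alarm when the assumed range or maximum step size is violated; the predicates and values are invented.

        from typing import Iterable, Iterator

        class AssumptionViolation(Exception):
            pass

        def monitor_inputs(readings: Iterable[float],
                           low: float = 0.0, high: float = 100.0,
                           max_jump: float = 10.0) -> Iterator[float]:
            """Pass readings through while checking the assumed range and maximum step size."""
            prev = None
            for i, r in enumerate(readings):
                if not (low <= r <= high):
                    raise AssumptionViolation(f"reading {i}={r} is outside the assumed range")
                if prev is not None and abs(r - prev) > max_jump:
                    raise AssumptionViolation(f"reading {i}={r} jumps by more than the assumed {max_jump}")
                prev = r
                yield r

        def component(readings: Iterable[float]) -> float:
            """Guarantee (an in-range average) holds only if the inputs satisfy the assumption."""
            total = count = 0
            for r in monitor_inputs(readings):
                total, count = total + r, count + 1
            return total / max(count, 1)

        print(component([20.0, 22.0, 25.0, 24.0]))      # assumption holds
        try:
            component([20.0, 22.0, 90.0])               # violates the assumed maximum jump
        except AssumptionViolation as alarm:
            print("alarm:", alarm)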

  9. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to

  10. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  11. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  12. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
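
    The bounding factor and the associated high threshold described above can be computed directly; the short sketch below implements the published formulas, with the numerical inputs chosen only for illustration.

        import math

        def bounding_factor(rr_eu: float, rr_ud: float) -> float:
            # maximum factor by which such a confounder can shift an observed risk ratio
            return (rr_eu * rr_ud) / (rr_eu + rr_ud - 1.0)

        def adjusted_lower_bound(rr_obs: float, rr_eu: float, rr_ud: float) -> float:
            # smallest true risk ratio consistent with rr_obs for the given sensitivity parameters
            return rr_obs / bounding_factor(rr_eu, rr_ud)

        def high_threshold(rr_obs: float) -> float:
            # value the larger of the two relative risks must reach for confounding to explain rr_obs away
            return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

        rr_obs = 2.0
        print("bounding factor for RR_EU = RR_UD = 3:", round(bounding_factor(3, 3), 2))            # 1.8
        print("observed RR = 2.0 can be attenuated at most to:", round(adjusted_lower_bound(rr_obs, 3, 3), 2))
        print("threshold the larger relative risk must reach:", round(high_threshold(rr_obs), 2))   # about 3.41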

  13. Allele Age Under Non-Classical Assumptions is Clarified by an Exact Computational Markov Chain Approach.

    Science.gov (United States)

    De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason

    2017-09-19

    Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to Ne = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
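
    As a generic illustration of the exact-computation idea (not the authors' allele-age formulae), the sketch below builds a neutral Wright-Fisher transition matrix with absorbing boundaries and solves the linear system (I - Q) t = 1 for the expected time to absorption from each transient state; the population size is arbitrary.

        import numpy as np
        from scipy.stats import binom

        N = 100                                       # number of gene copies (illustrative)
        states = np.arange(N + 1)
        # P[i, j] = probability of moving from i copies to j copies in one generation
        P = binom.pmf(states[None, :], N, states[:, None] / N)

        transient = states[1:-1]                      # copy numbers 1..N-1; 0 and N are absorbing
        Q = P[np.ix_(transient, transient)]
        t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))

        for i in (1, 10, 50):
            print(f"E[generations to loss or fixation | {i} copies] = {t[i - 1]:.1f}")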

  14. A Discrete Monetary Economic Growth Model with the MIU Approach

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2008-01-01

    Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as the Solow model, the Ramsey model, and the Tobin model. But we deal with behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.

  15. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, the absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
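
    A small simulation in the spirit of the commentary (illustrative, not the authors' code): with strongly skewed errors but a large sample, the 95% confidence interval for an OLS slope still covers the true value at close to the nominal rate; the regression setup loosely mirrors the diabetes example and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(42)
        n, reps, true_slope = 1000, 2000, 0.5
        covered = 0
        for _ in range(reps):
            x = rng.uniform(0, 10, n)                         # e.g. years since diagnosis
            eps = rng.exponential(1.0, n) - 1.0               # strongly skewed, mean-zero errors
            y = 2.0 + true_slope * x + eps                    # e.g. glycated hemoglobin level
            X = np.column_stack([np.ones(n), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
            covered += (beta[1] - 1.96 * se <= true_slope <= beta[1] + 1.96 * se)

        print(f"empirical coverage of the nominal 95% interval: {covered / reps:.3f}")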

  16. The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment

    International Nuclear Information System (INIS)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern

    2006-10-01

    This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. The parameters are topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is widely accepted as method, still some improvements and refinement are necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained

  17. The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern [eds.

    2006-10-15

    This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. The parameters are topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is widely accepted as method, still some improvements and refinement are necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained.

  18. Formalization and analysis of reasoning by assumption.

    Science.gov (United States)

    Bosse, Tibor; Jonker, Catholijn M; Treur, Jan

    2006-01-02

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.

  19. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel

    2008-01-01

    Thinking about improving the management of software development in software firms is dominated by one approach: the capability maturity model devised and administered at the Software Engineering Institute at Carnegie Mellon University. Though CMM, and its replacement CMMI, are widely known and used...... thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...

  20. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    Science.gov (United States)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we built a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.
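
    One of the skewing devices listed above, the hidden truncation mechanism, can be illustrated in a few lines (a sketch, not the paper's model): retaining a standard normal variable only when a correlated latent variable is positive yields the skew-normal density 2*phi(x)*Phi(alpha*x); the shape parameter is arbitrary.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        alpha = 4.0                                     # skewness (shape) parameter, arbitrary
        delta = alpha / np.sqrt(1 + alpha**2)

        # hidden truncation: (X, Y) standard bivariate normal with corr(X, Y) = delta, keep X given Y > 0
        n = 200000
        y_latent = rng.normal(size=n)
        x = delta * y_latent + np.sqrt(1 - delta**2) * rng.normal(size=n)
        x = x[y_latent > 0]

        print("sample mean:     ", round(x.mean(), 3))
        print("theoretical mean:", round(delta * np.sqrt(2 / np.pi), 3))   # mean of the skew-normal
        # histogram check of the density 2*phi(x)*Phi(alpha*x) near x = 1
        emp = np.mean((x > 0.9) & (x < 1.1)) / 0.2
        print("density near x=1, empirical vs 2*phi*Phi:", round(emp, 3),
              round(2 * norm.pdf(1.0) * norm.cdf(alpha * 1.0), 3))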

  1. The relevance of "theory rich" bridge assumptions

    NARCIS (Netherlands)

    Lindenberg, S

    1996-01-01

    Actor models are increasingly being used as a form of theory building in sociology because they can better represent the causal mechanisms that connect macro variables. However, actor models need additional assumptions, especially so-called bridge assumptions, for filling in the relatively empty

  2. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...

  3. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    Science.gov (United States)

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in ...

  4. On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling

    Science.gov (United States)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    Boundary conditions for the fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specification for arbitrary thermal boundary conditions is not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications, and the latter condition could lead to an ill-posed problem for fully developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations are examined. The approach taken is to assume a Taylor expansion in the wall-normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero values at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited to a very small region near the wall.
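    As a hedged illustration of the idea (the notation below is assumed, not taken from the paper), the fluctuating temperature can be expanded in the wall-normal coordinate $y$ as

    $$\theta'(y) = \theta'_w + b_1\, y + b_2\, y^2 + \dots,$$

    where setting $\theta'_w = 0$ recovers the conventional zero-fluctuation boundary condition, while retaining $\theta'_w \neq 0$ admits more general thermal boundary conditions.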

  5. On the derivation of approximations to cellular automata models and the assumption of independence.

    Science.gov (United States)

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the discrepancy between our approximation and the cellular automata is due entirely to the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
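    As a hedged illustration of what the independence assumption buys (a standard mean-field argument, not the authors' exact derivation): for a proliferation-only automaton with site-occupancy probability $C(t)$, treating neighbouring sites as statistically independent yields the logistic mean-field equation

    $$\frac{dC}{dt} = \lambda\, C\,(1 - C),$$

    whereas spatial correlations between occupied sites cause the true dynamics to deviate from this prediction.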

  6. IRT models with relaxed assumptions in eRm: A manual-like instruction

    Directory of Open Access Journals (Sweden)

    REINHOLD HATZINGER

    2009-03-01

    Full Text Available Linear logistic models with relaxed assumptions (LLRA), as introduced by Fischer (1974), are a flexible tool for the measurement of change for dichotomous or polytomous responses. As opposed to the Rasch model, assumptions on the dimensionality of items, their mutual dependencies and the distribution of the latent trait in the population of subjects are relaxed. Conditional maximum likelihood estimation allows for inference about treatment, covariate or trend effect parameters without taking the subjects' latent trait values into account. In this paper we show how LLRAs based on the LLTM, LRSM and LPCM can be used to answer various questions about the measurement of change and how they can be fitted in R using the eRm package. A number of small didactic examples are provided that can easily be used as templates for real data sets. All data files used in this paper are available from http://eRm.R-Forge.R-project.org/

  7. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    Science.gov (United States)

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
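    As a hedged sketch of the two approaches being contrasted (all parameter values and the matrix structure below are illustrative assumptions, not the paper's seabird data), the following compares the standard PBR formula with a toy Leslie-matrix projection that applies the same level of additional mortality:

```python
# Hedged, illustrative comparison of PBR with a simple Leslie-matrix projection.
import numpy as np

# Potential Biological Removal (Wade 1998): PBR = N_min * 0.5 * R_max * F_r
N_min, R_max, F_r = 10_000, 0.10, 0.5           # assumed seabird-like values
PBR = N_min * 0.5 * R_max * F_r
print(f"PBR = {PBR:.0f} additional mortalities per year")

# Leslie matrix for a simplified 3-stage life cycle (hypothetical rates)
fecundity = [0.0, 0.0, 0.3]                     # offspring per individual per stage
survival = [0.8, 0.85, 0.9]                     # annual survival per stage
L = np.array([fecundity,
              [survival[0], 0.0, 0.0],
              [0.0, survival[1], survival[2]]])
lam = np.linalg.eigvals(L).real.max()
print(f"growth rate without extra mortality: {lam:.3f}")

# Apply the PBR as additional proportional mortality (crudely, to all rates)
extra_mortality = PBR / N_min
lam_h = np.linalg.eigvals(L * (1.0 - extra_mortality)).real.max()
print(f"growth rate with PBR-level mortality applied: {lam_h:.3f}")
```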

  8. The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern [eds.

    2006-10-15

    This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present the prerequisites, methods and data used in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context, and on the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, the size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary: collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis.

  9. The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment

    International Nuclear Information System (INIS)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern

    2006-10-01

    This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present the prerequisites, methods and data used in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context, and on the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, the size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary: collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis

  10. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  11. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus

    Directory of Open Access Journals (Sweden)

    Constantinos Taliotis

    2017-10-01

    Full Text Available The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  12. Petroacoustic Modelling of Heterolithic Sandstone Reservoirs: A Novel Approach to Gassmann Modelling Incorporating Sedimentological Constraints and NMR Porosity data

    Science.gov (United States)

    Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.

    2012-12-01

    Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
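    For context, a minimal sketch of the textbook Gassmann fluid substitution that such workflows build on (the moduli and porosity values below are illustrative assumptions, and the shear modulus is left unchanged as in the standard formulation):

```python
# Hedged sketch of standard Gassmann fluid substitution (textbook form, not the
# authors' constrained workflow); all numerical values are assumed.
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Saturated bulk modulus from the dry-frame modulus via Gassmann (1951)."""
    num = (1.0 - k_dry / k_mineral) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral ** 2
    return k_dry + num / den

# Example: quartz-dominated frame, brine-filled pores (illustrative moduli in GPa)
k_sat = gassmann_ksat(k_dry=12.0, k_mineral=37.0, k_fluid=2.5, phi=0.25)
print(f"saturated bulk modulus ~ {k_sat:.1f} GPa")
```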

  13. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    of input uncertainties observed in the models. The explicit inclusion of such variations in the modelling process will lead to a better fulfilment of the assumptions made in formal statistical frameworks, thus reducing the need to resort to informal methods. The two approaches presented here...

  14. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regression-based), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper, the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  15. A novel approach for modelling complex maintenance systems using discrete event simulation

    International Nuclear Information System (INIS)

    Alrabghi, Abdullah; Tiwari, Ashutosh

    2016-01-01

    Existing approaches for modelling maintenance rely on oversimplified assumptions which prevent them from reflecting the complexity found in industrial systems. In this paper, we propose a novel approach that enables the modelling of non-identical multi-unit systems without restrictive assumptions on the number of units or their maintenance characteristics. Modelling complex interactions between maintenance strategies and their effects on assets in the system is achieved by accessing event queues in Discrete Event Simulation (DES). The approach utilises the wide success DES has achieved in manufacturing by allowing integration with models that are closely related to maintenance such as production and spare parts systems. Additional advantages of using DES include rapid modelling and visual interactive simulation. The proposed approach is demonstrated in a simulation based optimisation study of a published case. The current research is one of the first to optimise maintenance strategies simultaneously with their parameters while considering production dynamics and spare parts management. The findings of this research provide insights for non-conflicting objectives in maintenance systems. In addition, the proposed approach can be used to facilitate the simulation and optimisation of industrial maintenance systems. - Highlights: • This research is one of the first to optimise maintenance strategies simultaneously. • New insights for non-conflicting objectives in maintenance systems. • The approach can be used to optimise industrial maintenance systems.
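    As a hedged, generic illustration of discrete event simulation applied to maintenance (this uses the simpy package and hypothetical parameters; it is not the authors' model, which additionally accesses event queues and couples to production and spare-parts models):

```python
# Hedged sketch: machines fail at random times and share a single repair crew.
import random
import simpy

RANDOM_SEED, SIM_TIME = 42, 1000.0
MTBF, REPAIR_TIME = 50.0, 5.0            # assumed mean time between failures / repair

def machine(env, name, crew, log):
    while True:
        yield env.timeout(random.expovariate(1.0 / MTBF))   # run until failure
        failed_at = env.now
        with crew.request() as req:                          # wait for the repair crew
            yield req
            yield env.timeout(REPAIR_TIME)                   # corrective maintenance
        log.append((name, env.now - failed_at))              # downtime incl. queueing

random.seed(RANDOM_SEED)
env = simpy.Environment()
crew = simpy.Resource(env, capacity=1)
downtime_log = []
for i in range(3):
    env.process(machine(env, f"unit-{i}", crew, downtime_log))
env.run(until=SIM_TIME)
print("mean downtime per failure:",
      sum(d for _, d in downtime_log) / len(downtime_log))
```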

  16. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM - a linear and a quadratic model - by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
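    A minimal, hedged sketch of the generative-mixture idea on synthetic data (this is not the authors' implementation; the dimensionality, component counts and the logistic ground truth are assumptions):

```python
# Model spike-triggered and non-spike-triggered stimulus distributions with Gaussian
# mixtures, then combine them via Bayes' rule into a spike probability.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
d = 5                                               # stimulus dimensionality (assumed)
stim = rng.normal(size=(5000, d))
true_w = rng.normal(size=d)
p_spike = 1 / (1 + np.exp(-(stim @ true_w - 1.0)))  # synthetic ground-truth nonlinearity
spikes = rng.binomial(1, p_spike)

gmm_spike = GaussianMixture(n_components=3).fit(stim[spikes == 1])
gmm_nospike = GaussianMixture(n_components=3).fit(stim[spikes == 0])
prior = spikes.mean()

def spike_probability(x):
    """P(spike | stimulus) from the two mixture densities and the spike prior."""
    log_s = gmm_spike.score_samples(x) + np.log(prior)
    log_n = gmm_nospike.score_samples(x) + np.log(1 - prior)
    return 1 / (1 + np.exp(log_n - log_s))

print(spike_probability(stim[:5]))
```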

  17. Formalization and Analysis of Reasoning by Assumption

    OpenAIRE

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been speci...

  18. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    Science.gov (United States)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  19. Multiverse Assumptions and Philosophy

    Directory of Open Access Journals (Sweden)

    James R. Johnson

    2018-02-01

    Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with the fundamental nature of reality, ideas that cannot be proven right or wrong) topics such as: infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.

  20. A quantitative evaluation of a qualitative risk assessment framework: Examining the assumptions and predictions of the Productivity Susceptibility Analysis (PSA)

    Science.gov (United States)

    2018-01-01

    Qualitative risk assessment frameworks, such as the Productivity Susceptibility Analysis (PSA), have been developed to rapidly evaluate the risks of fishing to marine populations and prioritize management and research among species. Despite being applied to over 1,000 fish populations, and despite an ongoing debate about the most appropriate method to convert biological and fishery characteristics into an overall measure of risk, the assumptions and predictive capacity of these approaches have not been evaluated. Several interpretations of the PSA were mapped to a conventional age-structured fisheries dynamics model to evaluate the performance of the approach under a range of assumptions regarding exploitation rates and measures of biological risk. The results demonstrate that the underlying assumptions of these qualitative risk-based approaches are inappropriate, and the expected performance is poor for a wide range of conditions. The information required to score a fishery using a PSA-type approach is comparable to that required to populate an operating model and evaluate the population dynamics within a simulation framework. In addition to providing a more credible characterization of complex system dynamics, the operating model approach is transparent, reproducible and can evaluate alternative management strategies over a range of plausible hypotheses for the system. PMID:29856869

  1. The incompressibility assumption in computational simulations of nasal airflow.

    Science.gov (United States)

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

    Most of the computational works on nasal airflow up to date have assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulation for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below [Formula: see text]C approximately. Therefore, density variations should be considered for simulations at such low temperatures.
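    As a rough, back-of-the-envelope illustration (not taken from the article), the ideal gas law at approximately constant pressure gives

    $$\frac{\rho_{\text{cold}}}{\rho_{\text{body}}} \approx \frac{T_{\text{body}}}{T_{\text{cold}}} = \frac{310\ \text{K}}{273\ \text{K}} \approx 1.14,$$

    i.e. roughly a 14% density difference between freezing inhaled air and air at body temperature, which a strictly incompressible formulation cannot represent.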

  2. Capturing Assumptions while Designing a Verification Model for Embedded Systems

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    A formal proof of system correctness typically holds under a number of assumptions. Leaving them implicit raises the chance of using the system in a context that violates some assumptions, which in turn may invalidate the correctness proof. The goal of this paper is to show how combining

  3. A critical evaluation of the local-equilibrium assumption in modeling NAPL-pool dissolution

    Science.gov (United States)

    Seagren, Eric A.; Rittmann, Bruce E.; Valocchi, Albert J.

    1999-07-01

    An analytical modeling analysis was used to assess when local equilibrium (LE) and nonequilibrium (NE) modeling approaches may be appropriate for describing nonaqueous-phase liquid (NAPL) pool dissolution. NE mass-transfer between NAPL pools and groundwater is expected to affect the dissolution flux only under certain conditions; when Sh'St (the modified Sherwood number (Lx·kl/Dz) multiplied by the Stanton number (kl/vx)) reaches approximately 400, the NE and LE solutions converge, and the LE assumption is appropriate. Based on typical groundwater conditions, many cases of interest are expected to fall in this range. The parameter with the greatest impact on Sh'St is kl. The NAPL pool mass-transfer coefficient correlation of Pfannkuch [Pfannkuch, H.-O., 1984. Determination of the contaminant source strength from mass exchange processes at the petroleum-ground-water interface in shallow aquifer systems. In: Proceedings of the NWWA/API Conference on Petroleum Hydrocarbons and Organic Chemicals in Ground Water—Prevention, Detection, and Restoration, Houston, TX. Natl. Water Well Assoc., Worthington, OH, Nov. 1984, pp. 111-129.] was evaluated using the toluene pool data from Seagren et al. [Seagren, E.A., Rittmann, B.E., Valocchi, A.J., 1998. An experimental investigation of NAPL-pool dissolution enhancement by flushing. J. Contam. Hydrol., accepted.]. Dissolution flux predictions made with kl calculated using the Pfannkuch correlation were similar to the LE model predictions, and deviated systematically from predictions made using the average overall kl=4.76 m/day estimated by Seagren et al. [Seagren, E.A., Rittmann, B.E., Valocchi, A.J., 1998. An experimental investigation of NAPL-pool dissolution enhancement by flushing. J. Contam. Hydrol., accepted.] and from the experimental data for vx>18 m/day. The Pfannkuch correlation kl was too large for vx>≈10 m/day, possibly because of the relatively low Peclet number data used by Pfannkuch [Pfannkuch, H.-O., 1984. Determination
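    In the notation given in the abstract, the governing dimensionless group is

    $$\mathrm{Sh}'\,\mathrm{St} = \frac{L_x\, k_l}{D_z}\cdot\frac{k_l}{v_x},$$

    so, all else equal, larger interphase mass-transfer coefficients $k_l$, longer pools $L_x$ or slower groundwater velocities $v_x$ increase Sh'St and push the system toward local equilibrium.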

  4. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR⎪HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  5. Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics

    Science.gov (United States)

    García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team

    2016-06-01

    We propose a simple approach to homogeneously estimate the kinematic parameters of a broad variety of galaxies (ellipticals, spirals, irregulars or interacting systems). This methodology avoids the use of any kinematic model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, which are directly measured from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey

  6. Cement/clay interactions: feedback on the increasing complexity of modeling assumptions

    International Nuclear Information System (INIS)

    Marty, Nicolas C.M.; Gaucher, Eric C.; Tournassat, Christophe; Gaboreau, Stephane; Vong, Chan Quang; Claret, F.; Munier, Isabelle; Cochepin, Benoit

    2012-01-01

    Document available in extended abstract form only. Cementitious materials will be widely used in the French concept of radioactive waste repositories. During their degradation over time, in contact with geological pore water, they will release hyper-alkaline fluids rich in calcium and alkaline cations. This chemical gradient, likely to develop at the cement/clay interfaces, will induce geochemical transformations. The first simplified calculations, based mainly on simple mass balance, led to a very pessimistic understanding of the real expansion mechanism of the alkaline plume. However, geochemical and migration processes are much more complex because of the dissolution of the barrier's accessory phases and the precipitation of secondary minerals. To describe and to understand this complexity, coupled geochemistry and transport calculations are a useful and mandatory tool. Furthermore, such sets of models, when properly calibrated on experimental results, are able to give insights on larger time scales unreachable with experiments. For approximately 20 years, numerous papers have described the results of reactive transport modeling of cement/clay interactions with various numerical assumptions. For example, some authors selected a purely thermodynamic approach while others preferred a coupled thermodynamic/kinetic approach. Unfortunately, most of these studies used different and not comparable parameters such as space discretization, initial and boundary conditions, thermodynamic databases, clayey and cementitious materials, etc... This study revisits the types of simulations proposed in the past to represent the effect of an alkaline perturbation with regard to the degree of complexity that was considered. The main goal of the study is to perform simulations with a consistent set of data and an increasing complexity. In doing so, the analysis of numerical results will give a clear vision of the key parameters driving the expansion of alteration fronts and

  7. Towards representing human behavior and decision making in Earth system models - an overview of techniques and approaches

    Science.gov (United States)

    Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-11-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.

  8. The Wally plot approach to assess the calibration of clinical prediction models.

    Science.gov (United States)

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption, or alternatively just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
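    A hedged sketch of the underlying idea on synthetic data (this is not the 'wally' package implementation, and it ignores censoring and competing risks; all numbers are assumptions): outcomes are simulated under the calibration assumption and binned the same way as the observed data, showing how much variability perfect calibration alone can produce.

```python
# Compare an observed binned calibration curve with curves simulated under
# the calibration assumption (outcome ~ Bernoulli(predicted risk)).
import numpy as np

rng = np.random.default_rng(1)
pred = rng.uniform(0.05, 0.6, size=500)           # hypothetical predicted risks

def binned_calibration(pred, outcome, n_bins=10):
    """Mean predicted risk and observed event frequency per risk decile."""
    edges = np.quantile(pred, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(pred, edges[1:-1]), 0, n_bins - 1)
    return np.array([[pred[idx == b].mean(), outcome[idx == b].mean()]
                     for b in range(n_bins)])

observed = rng.binomial(1, pred * 1.2)            # deliberately miscalibrated outcomes
simulated = [rng.binomial(1, pred) for _ in range(3)]

print("observed:", binned_calibration(pred, observed)[:3])
for k, sim in enumerate(simulated):
    print(f"under calibration assumption #{k}:", binned_calibration(pred, sim)[:3])
```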

  9. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes ( n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
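    For orientation, the standard Greenhouse-Geisser correction factor referred to above (textbook definition, not a result of the study) is

    $$\hat{\varepsilon} = \frac{\left(\sum_{i=1}^{k-1}\lambda_i\right)^{2}}{(k-1)\sum_{i=1}^{k-1}\lambda_i^{2}},$$

    where the $\lambda_i$ are the eigenvalues of the covariance matrix of the orthonormalised contrasts of the $k$ measurement occasions; under sphericity all $\lambda_i$ are equal and $\hat{\varepsilon} = 1$, so no correction is applied.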

  10. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  11. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  12. Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''

    Science.gov (United States)

    Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.

    2011-05-01

    The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach exploits to the maximum the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because the loading history is neglected. A new approach called "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim is to adapt the PIA for cold forging modeling in this paper. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations. An incremental algorithm for the plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.

  13. On the validity of Brownian assumptions in the spin van der Waals model

    International Nuclear Information System (INIS)

    Oh, Suhk Kun

    1985-01-01

    A simple Brownian motion theory of the spin van der Waals model, which can be stationary, Markoffian or Gaussian, is studied. By comparing the Brownian motion theory with an exact theory called the generalized Langevin equation theory, the validity of the Brownian assumptions is tested. Thereby, it is shown explicitly how the Markoffian and Gaussian properties are modified in the spin van der Waals model under the influence of quantum fluctuations and long range ordering. (Author)

  14. Individual Change and the Timing and Onset of Important Life Events: Methods, Models, and Assumptions

    Science.gov (United States)

    Grimm, Kevin; Marcoulides, Katerina

    2016-01-01

    Researchers are often interested in studying how the timing of a specific event affects concurrent and future development. When faced with such research questions there are multiple statistical models to consider and those models are the focus of this paper as well as their theoretical underpinnings and assumptions regarding the nature of the…

  15. The place of quantitative energy models in a prospective approach

    International Nuclear Information System (INIS)

    Taverdet-Popiolek, N.

    2009-01-01

    Futurology above all depends on having the right mindset. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the actors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. On the other hand, forecasting is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, the medium- or long-term effects of energy policies on greenhouse gas emissions or global welfare to be measured. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop) developed during the prospective stage. When the horizon is far away (very long-term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. It is here that the main limit to the use of models in futurology lies. (author)

  16. Cloud-turbulence interactions: Sensitivity of a general circulation model to closure assumptions

    International Nuclear Information System (INIS)

    Brinkop, S.; Roeckner, E.

    1993-01-01

    Several approaches to parameterize the turbulent transport of momentum, heat, water vapour and cloud water for use in a general circulation model (GCM) have been tested in one-dimensional and three-dimensional model simulations. The schemes differ with respect to their closure assumptions (conventional eddy diffusivity model versus turbulent kinetic energy closure) and also regarding their treatment of cloud-turbulence interactions. The basic properties of these parameterizations are discussed first in column simulations of a stratocumulus-topped atmospheric boundary layer (ABL) under a strong subsidence inversion during the KONTROL experiment in the North Sea. It is found that the K-models tend to decouple the cloud layer from the adjacent layers because the turbulent activity is calculated from local variables. The higher-order scheme performs better in this respect because internally generated turbulence can be transported up and down through the action of turbulent diffusion. Thus, the TKE-scheme provides not only a better link between the cloud and the sub-cloud layer but also between the cloud and the inversion as a result of cloud-top entrainment. In the stratocumulus case study, where the cloud is confined by a pronounced subsidence inversion, increased entrainment favours cloud dilution through enhanced evaporation of cloud droplets. In the GCM study, however, additional cloud-top entrainment supports cloud formation because indirect cloud generating processes are promoted through efficient ventilation of the ABL, such as the enhanced moisture supply by surface evaporation and the increased depth of the ABL. As a result, tropical convection is more vigorous, the hydrological cycle is intensified, the whole troposphere becomes warmer and moister in general and the cloudiness in the upper part of the ABL is increased. (orig.)

  17. Pre-equilibrium assumptions and statistical model parameters effects on reaction cross-section calculations

    International Nuclear Information System (INIS)

    Avrigeanu, M.; Avrigeanu, V.

    1992-02-01

    A systematic study of the effects of statistical model parameters and semi-classical pre-equilibrium emission models has been carried out for the (n,p) reactions on the 56Fe and 60Co target nuclei. The results obtained by using various assumptions within a given pre-equilibrium emission model differ among themselves more than the results of different models used under similar conditions. The necessity of using realistic level density formulas is emphasized, especially in connection with pre-equilibrium emission models (i.e. with the exciton state density expression), while basic support could be found only by replacing the Williams exciton state density formula with a realistic one. (author). 46 refs, 12 figs, 3 tabs

  18. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Formark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site specific parameters, with about 22 that are time-dependent and result in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding in the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  19. A novel approach to the automatic control of scale model airplanes

    OpenAIRE

    Hua , Minh-Duc; Pucci , Daniele; Hamel , Tarek; Morin , Pascal; Samson , Claude

    2014-01-01

    This paper explores a new approach to the control of scale model airplanes as an extension of previous studies addressing the case of vehicles presenting a symmetry of revolution about the thrust axis. The approach is intrinsically nonlinear and, with respect to other contributions on aircraft nonlinear control, no small attack angle assumption is made in order to enlarge the controller's operating domain. Simulation results conducted on a simplified, but not overly ...

  20. A novel approach of modeling continuous dark hydrogen fermentation.

    Science.gov (United States)

    Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos

    2018-02-01

    In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
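    As a generic illustration of the kind of kinetic building block such CSTR models rest on (standard chemostat equations with Monod kinetics; the paper's actual reaction network, stoichiometry and bioenergetic yields are not reproduced here), a single substrate-biomass pair obeys

    $$\frac{dS}{dt} = D\,(S_{\mathrm{in}} - S) - \frac{\mu_{\max} S}{K_S + S}\,\frac{X}{Y},\qquad \frac{dX}{dt} = \left(\frac{\mu_{\max} S}{K_S + S} - D\right) X,$$

    with dilution rate $D = 1/\mathrm{HRT}$, which is why varying the HRT shifts the balance between microbial growth, product formation and washout.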

  1. TESTING THE ASSUMPTIONS AND INTERPRETING THE RESULTS OF THE RASCH MODEL USING LOG-LINEAR PROCEDURES IN SPSS

    NARCIS (Netherlands)

    TENVERGERT, E; GILLESPIE, M; KINGMA, J

    This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total

  2. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Directory of Open Access Journals (Sweden)

    Giordano James

    2010-01-01

    Full Text Available Abstract A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice.

  3. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Science.gov (United States)

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176

  4. Stable isotopes and elasmobranchs: tissue types, methods, applications and assumptions.

    Science.gov (United States)

    Hussey, N E; MacNeil, M A; Olin, J A; McMeans, B C; Kinney, M J; Chapman, D D; Fisk, A T

    2012-04-01

    Stable-isotope analysis (SIA) can act as a powerful ecological tracer with which to examine diet, trophic position and movement, as well as more complex questions pertaining to community dynamics and feeding strategies or behaviour among aquatic organisms. With major advances in the understanding of the methodological approaches and assumptions of SIA through dedicated experimental work in the broader literature coupled with the inherent difficulty of studying typically large, highly mobile marine predators, SIA is increasingly being used to investigate the ecology of elasmobranchs (sharks, skates and rays). Here, the current state of SIA in elasmobranchs is reviewed, focusing on available tissues for analysis, methodological issues relating to the effects of lipid extraction and urea, the experimental dynamics of isotopic incorporation, diet-tissue discrimination factors, estimating trophic position, diet and mixing models and individual specialization and niche-width analyses. These areas are discussed in terms of assumptions made when applying SIA to the study of elasmobranch ecology and the requirement that investigators standardize analytical approaches. Recommendations are made for future SIA experimental work that would improve understanding of stable-isotope dynamics and advance their application in the study of sharks, skates and rays. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  5. Formalization and Analysis of Reasoning by Assumption

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning

  6. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    Science.gov (United States)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.

  7. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  8. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
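
    As a rough illustration of the summarized-data issue discussed above, the sketch below (illustrative numbers only, not the datasets analysed in the paper) converts reported group means and standard deviations to log-normal parameters by moment matching and reads a relative-deviation BMD off a simple fitted exponential mean model; the dose values, the 10% benchmark response and the model form are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical summarized dose-response data (group mean +/- SD), as typically
# reported in the peer-reviewed literature; all values are illustrative.
doses = np.array([0.0, 10.0, 30.0, 100.0])   # mg/kg-day (assumed)
means = np.array([50.0, 47.0, 42.0, 33.0])   # e.g. body weight
sds   = np.array([5.0, 5.5, 6.0, 7.0])
n     = np.array([10, 10, 10, 10])

# Moment matching: characterize a log-normal from the reported mean and SD.
sigma2 = np.log(1.0 + (sds / means) ** 2)    # log-scale variance
mu_log = np.log(means) - sigma2 / 2.0        # log-scale mean
print("log-scale means:", mu_log.round(3), "log-scale SDs:", np.sqrt(sigma2).round(3))

# Fit a simple exponential mean model m(d) = a*exp(b*d) to the group means,
# weighting by the standard error of each group mean.
def mean_model(d, a, b):
    return a * np.exp(b * d)

(a_hat, b_hat), _ = curve_fit(mean_model, doses, means,
                              p0=(means[0], -0.001), sigma=sds / np.sqrt(n))

# Relative-deviation BMD: dose at which the mean falls 10% below control.
# For m(d) = a*exp(b*d) this has the closed form BMD = ln(0.9)/b.
bmd = np.log(0.9) / b_hat
print(f"fitted a={a_hat:.2f}, b={b_hat:.5f}; BMD (10% relative deviation) ~ {bmd:.1f}")
```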

  9. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    As a kind of intuitive psychology, approaches based on the 'Revealed Preferences' theory for determining acceptable risks are a useful method for generating hypotheses. Since reliability engineering develops faster than methods for determining reliability targets, the Revealed-Preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'Revealed Preferences' theory is based are identified and analysed, and then compared with experimentally obtained results. (orig./DG) [de]

  10. Simulating residential demand response: Improving socio-technical assumptions in activity-based models of energy demand

    OpenAIRE

    McKenna, E.; Higginson, S.; Grunewald, P.; Darby, S. J.

    2017-01-01

    Demand response is receiving increasing interest as a new form of flexibility within low-carbon power systems. Energy models are an important tool to assess the potential capability of demand side contributions. This paper critically reviews the assumptions in current models and introduces a new conceptual framework to better facilitate such an assessment. We propose three dimensions along which change could occur, namely technology, activities and service expectations. Using this framework, ...

  11. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    Science.gov (United States)

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
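
    To make the fuzzy-cognitive-map idea concrete, the following sketch iterates a small FCM for the housing example; the concepts mirror those named above, but the causal weights, squashing function and scenario are illustrative assumptions rather than the values elicited in the study.

```python
import numpy as np

# Concepts mirror the case-study; the signed causal weights W[i, j] (influence
# of concept i on concept j) are illustrative assumptions, not elicited values.
concepts = ["air-tightness", "ventilation", "indoor air quality",
            "mould/humidity", "health"]
W = np.array([
    [0.0, -0.7,  0.0,  0.5,  0.0],   # tighter envelope -> less ventilation, more humidity
    [0.0,  0.0,  0.6, -0.5,  0.0],   # ventilation -> better air quality, less humidity
    [0.0,  0.0,  0.0,  0.0,  0.7],   # air quality -> health
    [0.0,  0.0, -0.4,  0.0, -0.6],   # mould/humidity -> worse air quality and health
    [0.0,  0.0,  0.0,  0.0,  0.0],
])

def squash(x):
    """Sigmoid squashing keeps concept activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Scenario: clamp the intervention concept (tightened insulation) to 1 and
# iterate the standard FCM update until the activations settle.
state = np.array([1.0, 0.5, 0.5, 0.5, 0.5])
for _ in range(50):
    state = squash(state + state @ W)
    state[0] = 1.0   # keep the intervention switched on

for name, value in zip(concepts, state.round(2)):
    print(f"{name:>18}: {value}")
```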

  12. Contextuality under weak assumptions

    International Nuclear Information System (INIS)

    Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D

    2017-01-01

    The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove

  13. Assumptions and Policy Decisions for Vital Area Identification Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    U.S. Nuclear Regulatory Commission and IAEA guidance indicate that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from the Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions can be overlooked at the beginning stage of a VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for the analysis were applied based on document research and expert panel discussions. It was also found that there are more assumptions to be defined in further studies for other types of nuclear power plants. One of these assumptions is mission time, which was inherited from the PSA.

  14. Holistic approach to education and upbringing: Contradictory to the general assumption of life

    Directory of Open Access Journals (Sweden)

    Mihajlović Ljubiša M.

    2014-01-01

    Full Text Available Holistic education is a comprehensive view of education based on the assumption that each individual finds his own identity, meaning and purpose in life through connection with the community, nature and human values such as compassion and peace. Within holistic education the teacher is viewed not as an authority figure who guides and controls, but rather as a 'friend', a facilitator of learning: a guide and a companion in gaining experience. The norm is cooperation rather than competition. However, is this possible in real life? The answer is simple - it is not. The reason lies in the foundation of life itself: a molecule built in such a way that it does not permit such an idealistic approach to life and, therefore, to education. It is the DNA molecule: the molecule of life exhibiting, among other characteristics, a drive to reproduce and a tendency toward perpetual struggle and competition. This stands in stark opposition to the holistic approach to education, which does not recognize competition, struggle, gradation and rivalry. The development of an advanced and socially responsible society demands a partial, measured application of holism. This needs to be reflected in education as well: approved competition, clear and fair gradation, where the best in certain areas become the elite, while the rest follow or find solutions in accordance with their abilities.

  15. The error and covariance structures of the mean approach model of pooled cross-section and time series data

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1991-01-01

    This paper postulates the assumptions underlying the Mean Approach model and recasts the re-expressions of the normal equations of this model in partitioned matrices of covariances. These covariance structures have been analysed. (author). 16 refs

  16. Has the "Equal Environments" assumption been tested in twin studies?

    Science.gov (United States)

    Eaves, Lindon; Foley, Debra; Silberg, Judy

    2003-12-01

    A recurring criticism of the twin method for quantifying genetic and environmental components of human differences is the necessity of the so-called "equal environments assumption" (EEA) (i.e., that monozygotic and dizygotic twins experience equally correlated environments). It has been proposed to test the EEA by stratifying twin correlations by indices of the amount of shared environment. However, relevant environments may also be influenced by genetic differences. We present a model for the role of genetic factors in niche selection by twins that may account for variation in indices of the shared twin environment (e.g., contact between members of twin pairs). Simulations reveal that stratification of twin correlations by amount of contact can yield spurious evidence of large shared environmental effects in some strata and even give false indications of genotype x environment interaction. The stratification approach to testing the equal environments assumption may be misleading and the results of such tests may actually be consistent with a simpler theory of the role of genetic factors in niche selection.

  17. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  18. A hybrid modeling approach for option pricing

    Science.gov (United States)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations. One of these limitations is the assumption that the underlying probability distribution is lognormal, which is controversial. We propose a couple of hybrid models to reduce these limitations and enhance the ability of option pricing. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options for the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options, the neural network model performs better, while for both in-the-money and out-of-the-money options, the neuro-fuzzy model provides better results.
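
    For context, the Black-Scholes benchmark that the hybrid models are compared against can be written in a few lines; the sketch below uses illustrative inputs and a simple historical-volatility estimate as a stand-in for the paper's GARCH-type volatility inputs, and does not reproduce the neural-network or neuro-fuzzy components.

```python
import numpy as np
from math import log, sqrt, exp
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """European call price under the Black-Scholes lognormal assumption."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Placeholder daily log-returns standing in for an index return series; the
# paper instead feeds GARCH-type volatility forecasts into its models.
rng = np.random.default_rng(0)
log_returns = rng.normal(0.0003, 0.01, 250)
sigma_hist = log_returns.std(ddof=1) * np.sqrt(252)   # annualized volatility

# Illustrative contract: spot, strike, 3-month maturity, 2% risk-free rate.
print(black_scholes_call(S=1400.0, K=1450.0, T=0.25, r=0.02, sigma=sigma_hist))
```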

  19. NONLINEAR MODELS FOR DESCRIPTION OF CACAO FRUIT GROWTH WITH ASSUMPTION VIOLATIONS

    Directory of Open Access Journals (Sweden)

    JOEL AUGUSTO MUNIZ

    2017-01-01

    Full Text Available Cacao (Theobroma cacao L.) is an important fruit in the Brazilian economy, which is mainly cultivated in the southern State of Bahia. The optimal stage for harvesting is a major factor for fruit quality and the knowledge on its growth curves can help, especially in identifying the ideal maturation stage for harvesting. Nonlinear regression models have been widely used for description of growth curves. However, several studies in this subject do not consider the residual analysis, the existence of a possible dependence between longitudinal observations, or the sample variance heterogeneity, compromising the modeling quality. The objective of this work was to compare the fit of nonlinear regression models, considering residual analysis and assumption violations, in the description of the cacao (clone Sial-105) fruit growth. The data evaluated were extracted from Brito and Silva (1983), who conducted the experiment in the Cacao Research Center, Ilheus, State of Bahia. The variables fruit length, diameter and volume as a function of fruit age were studied. The use of weighting and incorporation of residual dependencies was efficient, since the modeling became more consistent, improving the model fit. Considering the first-order autoregressive structure, when needed, leads to significant reduction in the residual standard deviation, making the estimates more reliable. The Logistic model was the most efficient for the description of the cacao fruit growth.
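
    A minimal sketch of the kind of fit involved is given below: a Logistic growth curve estimated by ordinary nonlinear least squares on invented length-age data (not the Brito and Silva measurements); the paper's weighting and first-order autoregressive error structure are only noted in comments.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, B, k):
    """Logistic growth: asymptote A, shape B, rate k; t is fruit age in days."""
    return A / (1.0 + B * np.exp(-k * t))

# Invented age (days) and fruit length (cm) values, for illustration only.
age = np.array([15, 30, 45, 60, 75, 90, 105, 120, 135, 150], dtype=float)
length = np.array([2.1, 4.0, 7.5, 11.8, 15.4, 17.9, 19.3, 20.1, 20.5, 20.7])

(A, B, k), _ = curve_fit(logistic, age, length, p0=(21.0, 30.0, 0.05))
print(f"asymptotic length ~ {A:.1f} cm, growth rate ~ {k:.3f} per day")

# The paper additionally weights the observations to handle variance
# heterogeneity and models a first-order autoregressive (AR(1)) dependence
# between longitudinal residuals; a full treatment would use generalized
# (weighted) nonlinear least squares rather than the ordinary fit above.
```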

  20. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. The SR Approach: a new Estimation Method for Non-Linear and Non-Gaussian Dynamic Term Structure Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Christensen, Bent Jesper

    This paper suggests a new and easy approach to estimate linear and non-linear dynamic term structure models with latent factors. We impose no distributional assumptions on the factors and they may therefore be non-Gaussian. The novelty of our approach is to use many observables (yields or bonds p...

  2. Adult Learning Assumptions

    Science.gov (United States)

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…

  3. Unrealistic Assumptions in Economics: an Analysis under the Logic of Socioeconomic Processes

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2014-11-01

    Full Text Available The realism of assumptions is an ongoing debate within the philosophy of economics. One of the most referenced papers in this matter belongs to Milton Friedman. He defends the use of unrealistic assumptions, not only on pragmatic grounds, but also because of the intrinsic difficulties of determining the extent of realism. On the other hand, realists have criticized (and still do today) the use of unrealistic assumptions - such as the assumption of rational choice, perfect information, homogeneous goods, etc. However, they did not accompany their statements with a proper epistemological argument that supports their positions. In this work it is expected to show that the realism of (a particular sort of) assumptions is clearly relevant when examining economic models, since the system under study (the real economies) is not compatible with the logic of invariance and of mechanisms, but with the logic of possibility trees. Because of this, models will not function as tools for predicting outcomes, but as representations of alternative scenarios, whose similarity to the real world will be examined in terms of the verisimilitude of a class of model assumptions.

  4. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    Science.gov (United States)

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
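
    The sketch below is not the authors' Bayesian hierarchical procedure; it is a simple Monte Carlo illustration, under assumed effect sizes, of why the constancy assumption matters: with a fixed margin derived from the historical effect, a placebo-level experimental treatment is rarely declared non-inferior when constancy holds, but increasingly often as the comparator's current effect shrinks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Assumed historical effect of the active comparator over placebo (higher is
# better) and a fixed margin retaining 50% of that effect; illustrative only.
delta_hist = 4.0
margin = 0.5 * delta_hist
sd, n = 10.0, 200          # per-arm SD and sample size in the new trial

def prob_conclude_noninferior(delta_current, n_sim=5000, alpha=0.025):
    """Fraction of simulated trials declaring a truly placebo-level
    experimental arm non-inferior, when the comparator's effect in the
    current population is delta_current."""
    wins = 0
    for _ in range(n_sim):
        exp_arm = rng.normal(0.0, sd, n)             # placebo-level efficacy
        ctrl    = rng.normal(delta_current, sd, n)   # active comparator
        diff = exp_arm.mean() - ctrl.mean()
        se = np.sqrt(exp_arm.var(ddof=1) / n + ctrl.var(ddof=1) / n)
        # Non-inferiority if the lower confidence bound exceeds -margin.
        wins += (diff - stats.norm.ppf(1 - alpha) * se) > -margin
    return wins / n_sim

print("constancy holds (effect 4.0):", prob_conclude_noninferior(4.0))  # near 0
print("violated        (effect 2.0):", prob_conclude_noninferior(2.0))  # ~ alpha
print("badly violated  (effect 1.0):", prob_conclude_noninferior(1.0))  # inflated
```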

  5. Comparative Interpretation of Classical and Keynesian Fiscal Policies (Assumptions, Principles and Primary Opinions)

    Directory of Open Access Journals (Sweden)

    Engin Oner

    2015-06-01

    Full Text Available In the Classical School, founded by Adam Smith, which gives prominence to supply and adopts an approach of unbiased finance, the economy is always in a state of full-employment equilibrium. In this system of thought, whose main philosophy is budget balance, which asserts that prices and wages are flexible and regards public debt as an extraordinary instrument, interference of the state in economic and social life is frowned upon. In line with the views of classical thought, classical fiscal policy is based on three basic assumptions. These are the "Consumer State Assumption", the assumption that "Public Expenditures are Always Ineffectual" and the assumption concerning the "Impartiality of the Taxes and Expenditure Policies Implemented by the State". On the other hand, the Keynesian School, founded by John Maynard Keynes, gives prominence to demand, adopts the approach of functional finance, and asserts that underemployment and over-employment equilibria exist in the economy alongside full-employment equilibrium, that problems cannot be solved through the invisible hand, that prices and wages are rigid, and that state intervention is essential, with fiscal policies having to be utilized effectively to this end. Keynesian fiscal policy depends on three primary assumptions. These are the assumption of the "Filter State", the assumption that "public expenditures are sometimes effective and sometimes ineffective or neutral" and the assumption that "the tax, debt and expenditure policies of the state can never be impartial".

  6. A novel modeling approach for job shop scheduling problem under uncertainty

    Directory of Open Access Journals (Sweden)

    Behnam Beheshti Pur

    2013-11-01

    Full Text Available When aiming at improving efficiency and reducing cost in manufacturing environments, production scheduling can play an important role. Although a typical workshop is full of uncertainties, researchers using mathematical programming have mainly focused on deterministic problems. After briefly reviewing and discussing popular modeling approaches in the field of stochastic programming, this paper proposes a new approach based on utility theory for a certain range of problems and under some practical assumptions. Expected utility programming, as the proposed approach, is compared with other well-known methods, and its meaningfulness and usefulness are illustrated via numerical examples and a real case.
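
    As a toy illustration of ranking schedules by expected utility rather than expected cost, the sketch below evaluates two job sequences on a single machine under lognormally distributed processing times with an exponential disutility; the problem size, distributions and risk-aversion parameter are all assumptions for the example and are much simpler than a real job-shop instance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-machine instance: expected processing times of four jobs and a
# common coefficient of variation; all values are assumptions for illustration.
mean_proc = np.array([4.0, 6.0, 3.0, 8.0])
cv = 0.4
sigma = np.sqrt(np.log(1.0 + cv ** 2))
mu = np.log(mean_proc) - sigma ** 2 / 2.0

def expected_utility(sequence, n_scen=5000, risk_aversion=0.02):
    """Expected exponential (risk-averse) utility of the total flow time."""
    total = 0.0
    for _ in range(n_scen):
        samples = rng.lognormal(mu, sigma)              # one processing-time scenario
        flow_time = np.cumsum(samples[sequence]).sum()  # sum of completion times
        total += -np.exp(risk_aversion * flow_time)     # disutility of the cost
    return total / n_scen

seq_spt = [2, 0, 1, 3]   # shortest-expected-processing-time order
seq_lpt = [3, 1, 0, 2]   # longest first
print("E[utility], SPT order:", expected_utility(seq_spt))
print("E[utility], LPT order:", expected_utility(seq_lpt))
```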

  7. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    Science.gov (United States)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, one needs a model that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  8. A Comparison of Modeling Approaches in Simulating Chlorinated Ethene Removal in a Constructed Wetland by a Microbial Consortia

    National Research Council Canada - National Science Library

    Campbell, Jason

    2002-01-01

    ... of the modeling approaches affect simulation results. Concepts like microbial growth in the form of a biofilm and spatially varying contaminant concentrations bring the validity of the CSTR assumption into question...

  9. Studies on the effect of flaw detection probability assumptions on risk reduction at inspection

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Cronvall, O.; Maennistoe, I. (VTT Technical Research Centre of Finland (Finland)); Gunnars, J.; Alverlind, L.; Dillstroem, P. (Inspecta Technology, Stockholm (Sweden)); Gandossi, L. (European Commission Joint Research Centre, Brussels (Belgium))

    2009-12-15

    The aim of the project was to study the effect of POD assumptions on failure probability using structural reliability models. The main interest was to investigate whether it is justifiable to use a simplified POD curve e.g. in risk-informed in-service inspection (RI-ISI) studies. The results of the study indicate that the use of a simplified POD curve could be justifiable in RI-ISI applications. Another aim was to compare various structural reliability calculation approaches for a set of cases. Through benchmarking one can identify differences and similarities between modelling approaches, and provide added confidence on models and identify development needs. Comparing the leakage probabilities calculated by different approaches at the end of plant lifetime (60 years) shows that the results are very similar when inspections are not accounted for. However, when inspections are taken into account the predicted order of magnitude differs. Further studies would be needed to investigate the reasons for the differences. Development needs and plans for the benchmarked structural reliability models are discussed. (author)

  10. Studies on the effect of flaw detection probability assumptions on risk reduction at inspection

    International Nuclear Information System (INIS)

    Simola, K.; Cronvall, O.; Maennistoe, I.; Gunnars, J.; Alverlind, L.; Dillstroem, P.; Gandossi, L.

    2009-12-01

    The aim of the project was to study the effect of POD assumptions on failure probability using structural reliability models. The main interest was to investigate whether it is justifiable to use a simplified POD curve e.g. in risk-informed in-service inspection (RI-ISI) studies. The results of the study indicate that the use of a simplified POD curve could be justifiable in RI-ISI applications. Another aim was to compare various structural reliability calculation approaches for a set of cases. Through benchmarking one can identify differences and similarities between modelling approaches, and provide added confidence on models and identify development needs. Comparing the leakage probabilities calculated by different approaches at the end of plant lifetime (60 years) shows that the results are very similar when inspections are not accounted for. However, when inspections are taken into account the predicted order of magnitude differs. Further studies would be needed to investigate the reasons for the differences. Development needs and plans for the benchmarked structural reliability models are discussed. (author)
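
    A simple way to see why the shape of the POD curve matters is sketched below: a log-logistic POD curve and a simplified threshold POD are applied to the same toy population of flaw depths and compared on the probability that a flaw escapes two inspections; all parameter values are assumptions for illustration and are not taken from the benchmarked models.

```python
import numpy as np

rng = np.random.default_rng(42)

def pod_loglogistic(a, a50=2.0, beta=1.5):
    """Detailed POD curve: probability of detecting a flaw of depth a (mm)."""
    return 1.0 / (1.0 + np.exp(-beta * (np.log(a) - np.log(a50))))

def pod_simplified(a, threshold=2.0, p_detect=0.8):
    """Simplified POD: constant detection probability above a size threshold."""
    return np.where(a >= threshold, p_detect, 0.0)

# Toy population of flaw depths (mm) present at the time of inspection.
flaw_depths = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=100_000)

for name, pod in [("log-logistic", pod_loglogistic), ("simplified", pod_simplified)]:
    p_missed_twice = (1.0 - pod(flaw_depths)) ** 2    # flaw escapes two inspections
    print(f"{name:>12}: mean P(missed in both inspections) = {p_missed_twice.mean():.3f}")
```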

  11. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

    Full Text Available Dengue is a vector-borne disease recognized as the major arbovirosis, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multi-serotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission as well as limited data availability remain important concerns when applying dengue models to real-world decision problems.
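
    For readers unfamiliar with the model class, a minimal single-serotype vector-host model (SIR hosts, SI vectors) of the kind reviewed can be written as a small ODE system; the sketch below uses illustrative parameter values that are not drawn from any specific reviewed model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal single-serotype vector-host model (SIR hosts, SI vectors);
# all parameter values are illustrative assumptions.
b, beta_hv, beta_vh = 0.5, 0.4, 0.4   # biting rate; transmission probabilities
gamma, mu_v = 1.0 / 7, 1.0 / 14       # host recovery and vector mortality rates
N_h, N_v = 10_000, 20_000             # host and vector population sizes

def rhs(t, y):
    S_h, I_h, R_h, S_v, I_v = y
    lam_h = b * beta_hv * I_v / N_h   # force of infection on hosts
    lam_v = b * beta_vh * I_h / N_h   # force of infection on vectors
    return [
        -lam_h * S_h,
        lam_h * S_h - gamma * I_h,
        gamma * I_h,
        mu_v * N_v - lam_v * S_v - mu_v * S_v,   # vector births balance deaths
        lam_v * S_v - mu_v * I_v,
    ]

y0 = [N_h - 10, 10, 0, N_v, 0]
sol = solve_ivp(rhs, (0.0, 365.0), y0, max_step=1.0)
print("fraction of hosts ever infected:", round(sol.y[2, -1] / N_h, 3))
```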

  12. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Modeling assumptions influence on stress and strain state in 450 t cranes hoisting winch construction

    Directory of Open Access Journals (Sweden)

    Damian GĄSKA

    2011-01-01

    Full Text Available This work investigates the FEM simulation of the stress and strain state of a selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The trolley model was built from several parts cooperating with each other (in contact). The influence of model assumptions (simplifications) in selected construction nodes on the value of the maximum stress and strain and its area of occurrence was analyzed. The aim of this study was to determine whether simplifications which reduce the time required to prepare the model and perform calculations (e.g., a rigid connection instead of contact) substantially change the characteristics of the model.

  14. Modelling carbonaceous aerosol from residential solid fuel burning with different assumptions for emissions

    Directory of Open Access Journals (Sweden)

    R. Ots

    2018-04-01

    Full Text Available Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist – all emissions redistributed linearly to population density – is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than

  15. Modelling carbonaceous aerosol from residential solid fuel burning with different assumptions for emissions

    Science.gov (United States)

    Ots, Riinu; Heal, Mathew R.; Young, Dominique E.; Williams, Leah R.; Allan, James D.; Nemitz, Eiko; Di Marco, Chiara; Detournay, Anais; Xu, Lu; Ng, Nga L.; Coe, Hugh; Herndon, Scott C.; Mackenzie, Ian A.; Green, David C.; Kuenen, Jeroen J. P.; Reis, Stefan; Vieno, Massimo

    2018-04-01

    Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist - all emissions redistributed linearly to population density - is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than inventory
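
    The scenario logic can be illustrated with a toy redistribution calculation; the grid, emission totals and population counts below are invented for the example and only mimic the Base4x scaling and the combRedist half-redistribution by population described above.

```python
import numpy as np

# Toy 1-D "grid" of baseline residential solid-fuel PM2.5 emissions (t/yr) and
# residential population per cell; zeros stand for smoke control areas that the
# inventory assumes emission-free. All numbers are invented for illustration.
baseline = np.array([5.0, 0.0, 12.0, 3.0, 0.0, 8.0])
population = np.array([2_000, 15_000, 8_000, 1_000, 20_000, 4_000])

# Base4x: scale the officially reported emissions by a factor of four.
base4x = 4.0 * baseline

# combRedist: keep half of the baseline in place and redistribute the other
# half in proportion to residential population.
comb_redist = 0.5 * baseline + 0.5 * baseline.sum() * population / population.sum()

# Redist: all emissions allocated linearly to population.
redist = baseline.sum() * population / population.sum()

print("Base4x    :", base4x)
print("combRedist:", comb_redist.round(2))
print("Redist    :", redist.round(2))
```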

  16. Evaluating methodological assumptions of a catch-curve survival estimation of unmarked precocial shorebird chicks

    Science.gov (United States)

    McGowan, Conor P.; Gardner, Beth

    2013-01-01

    Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival for age based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios where the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with true survival rates of the simulation. Violating the accurate aging and the independence assumptions did not result in biased daily survival estimates, whereas unequal detection for younger or older birds and violating the birth death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth death equilibrium are key to the successful use of this method for precocial shorebirds.
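
    The basic catch-curve estimator that the simulation study evaluates amounts to a log-linear regression of counts on age; the sketch below uses invented counts and assumes the equal-detectability and birth-death-equilibrium conditions discussed above.

```python
import numpy as np

# Invented age-based counts of unmarked chicks pooled over surveys. Under the
# catch-curve assumptions (birth-death equilibrium, equal detectability across
# ages, accurate aging), expected counts decline geometrically with age,
# E[N_a] = N_0 * S**a, so ln(N_a) is linear in age with slope ln(S).
ages = np.arange(15)
counts = np.array([120, 104, 95, 83, 76, 70, 62, 55, 52, 44, 41, 38, 33, 30, 27])

slope, intercept = np.polyfit(ages, np.log(counts), 1)
daily_survival = np.exp(slope)
print(f"estimated daily survival ~ {daily_survival:.3f}")
print(f"implied survival to a 28-day fledging age ~ {daily_survival ** 28:.3f}")
```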

  17. Temporal Distinctiveness in Task Switching: Assessing the Mixture-Distribution Assumption

    Directory of Open Access Journals (Sweden)

    James A Grange

    2016-02-01

    Full Text Available In task switching, increasing the response–cue interval (RCI) has been shown to reduce the switch cost. This has been attributed to a time-based decay process influencing the activation of memory representations of tasks (task-sets). Recently, an alternative account based on interference rather than decay has been successfully applied to these data (Horoufchin et al., 2011). In this account, variation of the RCI is thought to influence the temporal distinctiveness (TD) of episodic traces in memory, thus affecting their retrieval probability. This can affect performance as retrieval probability influences response time: If retrieval succeeds, responding is fast due to positive priming; if retrieval fails, responding is slow, due to having to perform the task via a slow algorithmic process. This account - and a recent formal model (Grange & Cross, 2015) - makes the strong prediction that all RTs are a mixture of one of two processes: a fast process when retrieval succeeds, and a slow process when retrieval fails. The present paper assesses the evidence for this mixture-distribution assumption in TD data. In a first section, statistical evidence for mixture-distributions is found using the fixed-point property test. In a second section, a mathematical process model with mixture-distributions at its core is fitted to the response time distribution data. Both approaches provide good evidence in support of the mixture-distribution assumption, and thus support temporal distinctiveness accounts of the data.
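
    The fixed-point property behind the mixture-distribution test can be checked numerically in a few lines; the component distributions and mixing proportions below are assumptions chosen for illustration, not the fitted values from the paper.

```python
import numpy as np
from scipy.stats import lognorm

# Assumed fast (retrieval succeeds) and slow (algorithmic) RT components with
# fixed parameters; conditions differ only in the retrieval probability p.
fast = lognorm(s=0.3, scale=500)   # ms
slow = lognorm(s=0.3, scale=900)

def mixture_pdf(x, p):
    return p * fast.pdf(x) + (1.0 - p) * slow.pdf(x)

x = np.linspace(200, 2000, 20_000)

def crossing_point(p1, p2):
    """Location where the densities of two mixtures intersect."""
    diff = mixture_pdf(x, p1) - mixture_pdf(x, p2)
    idx = np.where(np.diff(np.sign(diff)) != 0)[0][0]
    return x[idx]

# f_p1(x) - f_p2(x) = (p1 - p2) * (f_fast(x) - f_slow(x)), which vanishes where
# the component densities are equal, independently of p -- the fixed point.
print(crossing_point(0.2, 0.5), crossing_point(0.5, 0.9), crossing_point(0.3, 0.8))
```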

  18. Spatial modelling of assumptions of tourism development using geographic IT

    Directory of Open Access Journals (Sweden)

    Jitka Machalová

    2010-01-01

    Full Text Available The aim of this article is to show the possibilities of spatial modelling and analysing of assumptions of tourism development in the Czech Republic, with the objective of making decision-making processes in tourism easier and more efficient (for companies, clients as well as destination managements). The development and placement of tourism depend on the factors (conditions) that influence its application in specific areas. These factors are usually divided into three groups: selective, localization and realization. Tourism is inseparably connected with space - the countryside. The countryside can be modelled and consecutively analysed by means of geographical information technologies. With the help of spatial modelling and the subsequent analyses, the localization and realization conditions in the regions of the Czech Republic have been evaluated. The best localization conditions have been found in the Liberecký region. The capital city of Prague has negligible natural conditions; however, the social ones are on a high level. Next, the spatial analyses have shown that the best realization conditions are provided by the capital city of Prague. Then the Central-Bohemian, South-Moravian, Moravian-Silesian and Karlovarský regions follow. The development of a tourism destination depends not only on the localization and realization factors but is basically affected by the level of local destination management. Spatial modelling can help destination managers in decision-making processes in order to make optimal use of the destination potential and to target their marketing activities efficiently.

  19. Stability and disease persistence in an age-structured SIS epidemic model with vertical transmission and proportionate mixing assumption

    International Nuclear Information System (INIS)

    El-Doma, M.

    2001-02-01

    The stability of the endemic equilibrium of an SIS age-structured epidemic model of a vertically as well as horizontally transmitted disease is investigated when the force of infection is of proportionate mixing assumption type. We also investigate the uniform weak disease persistence. (author)

  20. Drug policy in sport: hidden assumptions and inherent contradictions.

    Science.gov (United States)

    Smith, Aaron C T; Stewart, Bob

    2008-03-01

    This paper considers the assumptions underpinning the current drugs-in-sport policy arrangements. We examine the assumptions and contradictions inherent in the policy approach, paying particular attention to the evidence that supports different policy arrangements. We find that the current anti-doping policy of the World Anti-Doping Agency (WADA) contains inconsistencies and ambiguities. WADA's policy position is predicated upon four fundamental principles: first, the need for sport to set a good example; secondly, the necessity of ensuring a level playing field; thirdly, the responsibility to protect the health of athletes; and fourthly, the importance of preserving the integrity of sport. A review of the evidence, however, suggests that sport is a problematic institution when it comes to setting a good example for the rest of society. Neither is it clear that sport has an inherent or essential integrity that can only be sustained through regulation. Furthermore, it is doubtful that WADA's anti-doping policy is effective in maintaining a level playing field, or is the best means of protecting the health of athletes. The WADA anti-doping policy is based too heavily on principles of minimising drug use, and gives insufficient weight to the minimisation of drug-related harms. As a result, drug-related harms are being poorly managed in sport. We argue that anti-doping policy in sport would benefit from placing greater emphasis on a harm minimisation model.

  1. A Nonparametric Operational Risk Modeling Approach Based on Cornish-Fisher Expansion

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhu

    2014-01-01

    Full Text Available It is generally accepted that the choice of severity distribution in the loss distribution approach has a significant effect on the operational risk capital estimation. However, the usually used parametric approaches with a predefined distribution assumption might not be able to fit the severity distribution accurately. The objective of this paper is to propose a nonparametric operational risk modeling approach based on Cornish-Fisher expansion. In this approach, the samples of severity are generated by the Cornish-Fisher expansion and then used in a Monte Carlo simulation to sketch the annual operational loss distribution. In the experiment, the proposed approach is employed to calculate the operational risk capital charge for the overall Chinese banking sector. The experiment dataset is the most comprehensive operational risk dataset in China as far as we know. The results show that the proposed approach is able to use the information of higher-order moments and might be more effective and stable than the usually used parametric approaches.
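
    A minimal sketch of the general idea (not the authors' implementation or data) is given below: severity quantiles are generated with the Cornish-Fisher expansion matched to the sample's first four moments, and an annual aggregate loss distribution is built by Monte Carlo with a Poisson frequency; the loss data, frequency parameter and clipping rule are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

rng = np.random.default_rng(7)

# Illustrative operational-loss severities (e.g. million CNY); placeholders for
# the pooled Chinese banking losses used in the paper.
observed = rng.lognormal(mean=1.0, sigma=0.8, size=400)
m, s = observed.mean(), observed.std(ddof=1)
S, K = skew(observed), kurtosis(observed)   # sample skewness, excess kurtosis

def cf_severity(u):
    """Severity quantiles via the Cornish-Fisher expansion of the normal quantile."""
    z = norm.ppf(u)
    z_cf = (z + (z ** 2 - 1) * S / 6 + (z ** 3 - 3 * z) * K / 24
            - (2 * z ** 3 - 5 * z) * S ** 2 / 36)
    # The expansion can stray negative deep in the left tail; clip as a crude guard.
    return np.maximum(m + s * z_cf, 0.0)

# Monte Carlo of the annual aggregate loss: Poisson frequency, CF-based severity.
lam, n_years = 25, 50_000     # assumed annual event frequency
annual_loss = np.array([cf_severity(rng.uniform(size=rng.poisson(lam))).sum()
                        for _ in range(n_years)])

print("99.9% VaR (operational risk capital):", round(np.quantile(annual_loss, 0.999), 1))
```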

  2. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make underlying assumptions, values and views regarding children and childhood in making design decisions more explicit. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting on these? … The workshop intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design…

  3. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    Science.gov (United States)

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

    It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.
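
    For concreteness, the index of moderated mediation in a simple linear first-stage-moderation model is just the product of the X-by-moderator interaction coefficient in the mediator model and the mediator coefficient in the outcome model; the sketch below computes it on simulated data and is only an illustration of the quantity discussed, not of the paper's confounding results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500

# Simulated data in which the X -> M path is moderated by W (true index = 0.5 * 0.6).
X = rng.normal(size=n)
W = rng.normal(size=n)
M = 0.4 * X + 0.3 * W + 0.5 * X * W + rng.normal(size=n)
Y = 0.2 * X + 0.6 * M + rng.normal(size=n)
df = pd.DataFrame(dict(X=X, W=W, M=M, Y=Y))

med = smf.ols("M ~ X * W", data=df).fit()      # mediator model with moderation
out = smf.ols("Y ~ X + M + W", data=df).fit()  # outcome model

a3 = med.params["X:W"]   # moderation of the X -> M path
b = out.params["M"]      # M -> Y path
print("index of moderated mediation (a3 * b):", round(a3 * b, 3))
```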

  4. Influence of model assumptions about HIV disease progression after initiating or stopping treatment on estimates of infections and deaths averted by scaling up antiretroviral therapy

    Science.gov (United States)

    Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir

    2018-01-01

    Background Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results Little absolute difference (ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only re-initiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136

  5. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to the supervised hot spot prediction algorithms, the semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated a better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all the three semisupervised assumptions, i.e., smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction, by considering all the three semisupervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which is implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
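
    The sketch below is not the authors' IterPropMCS procedure; it is a generic graph-based label propagation over a k-nearest-neighbour graph of hypothetical residue features, included only to illustrate how unlabeled residues can receive labels under the smoothness and manifold assumptions.

      import numpy as np
      from sklearn.semi_supervised import LabelPropagation

      rng = np.random.default_rng(1)

      # Hypothetical structure-based residue features; 1 = hot spot, 0 = not,
      # -1 marks residues whose labels are withheld (unlabeled).
      X = rng.normal(size=(200, 8))
      y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
      y = y_true.copy()
      unlabeled = rng.random(200) < 0.8
      y[unlabeled] = -1

      # Labels spread over a k-NN graph, so that residues close on the
      # (assumed) feature manifold end up with similar labels.
      model = LabelPropagation(kernel="knn", n_neighbors=7, max_iter=1000)
      model.fit(X, y)

      accuracy = (model.transduction_[unlabeled] == y_true[unlabeled]).mean()
      print(f"transductive accuracy on unlabeled residues: {accuracy:.2f}")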

  6. Predicting salt intrusion into freshwater aquifers resulting from CO2 injection – A study on the influence of conservative assumptions

    DEFF Research Database (Denmark)

    Walter, Lena; Binning, Philip John; Class, Holger

    2013-01-01

    . A crucial task is to choose an appropriate conceptual model and relevant scenarios. Overly conservative assumptions may lead to estimation of unacceptably high risks, and thus prevent the implementation of a CO2 storage project unnecessarily. On the other hand, risk assessment should not lead...... to an underestimation of hazards. This study compares two conceptual model approaches for the numerical simulation of brine-migration scenarios through a vertical fault and salt intrusion into a fresh water aquifer. The first approach calculates salt discharge into freshwater using an immiscible two-phase model...... with constant salinity in the brine phase. The second approach takes compositional effects into account and considers salinity as a variable parameter in the water phase. A spatial model coupling is introduced to adapt the increased model complexity to the required complexity of the physics. The immiscible two...

  7. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of and increased transparency in the reporting of statistical assumption checking.
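
    A minimal sketch of the practice the review recommends, on simulated data: fit the regression first and check normality of the residuals, not of the raw variables.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      x = rng.normal(size=300)
      y = 1.0 + 2.0 * x + rng.normal(scale=0.8, size=300)   # hypothetical data

      # Fit the model, then test the residuals -- the quantity the normality
      # assumption actually concerns -- rather than x or y themselves.
      slope, intercept, *_ = stats.linregress(x, y)
      residuals = y - (intercept + slope * x)

      w_stat, p_value = stats.shapiro(residuals)
      print(f"Shapiro-Wilk on residuals: W={w_stat:.3f}, p={p_value:.3f}")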

  8. Towards a realistic approach to validation of reactive transport models for performance assessment

    International Nuclear Information System (INIS)

    Siegel, M.D.

    1993-01-01

    Performance assessment calculations are based on geochemical models that assume that interactions among radionuclides, rocks and groundwaters under natural conditions, can be estimated or bound by data obtained from laboratory-scale studies. The data include radionuclide distribution coefficients, measured in saturated batch systems of powdered rocks, and retardation factors measured in short-term column experiments. Traditional approaches to model validation cannot be applied in a straightforward manner to the simple reactive transport models that use these data. An approach to model validation in support of performance assessment is described in this paper. It is based on a recognition of different levels of model validity and is compatible with the requirements of current regulations for high-level waste disposal. Activities that are being carried out in support of this approach include (1) laboratory and numerical experiments to test the validity of important assumptions inherent in current performance assessment methodologies,(2) integrated transport experiments, and (3) development of a robust coupled reaction/transport code for sensitivity analyses using massively parallel computers

  9. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
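
    For contrast with the feature-based models above, the following sketch scores candidate sites with a toy PSSM (illustrative log-odds values), which makes explicit the positional-independence assumption that FMMs relax: the total score is just a sum of per-position contributions.

      import numpy as np

      bases = {"A": 0, "C": 1, "G": 2, "T": 3}
      # Illustrative 4-position log-odds matrix (rows = positions, columns = A,C,G,T)
      pssm = np.array([[ 1.2, -0.8, -1.1, -0.5],
                       [-0.9,  1.0, -0.7, -0.6],
                       [-1.0, -0.9,  1.1, -0.4],
                       [-0.6, -0.5, -0.8,  1.3]])

      def pssm_score(site):
          """Sum of per-position scores; positions are assumed independent."""
          return sum(pssm[i, bases[b]] for i, b in enumerate(site))

      print(pssm_score("ACGT"))   # well-matching site  -> 4.6
      print(pssm_score("TTTT"))   # poorly matching site -> -0.2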

  10. Interface Input/Output Automata: Splitting Assumptions from Guarantees

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Nyman, Ulrik; Wasowski, Andrzej

    2006-01-01

    's \\IOAs [11], relying on a context dependent notion of refinement based on relativized language inclusion. There are two main contributions of the work. First, we explicitly separate assumptions from guarantees, increasing the modeling power of the specification language and demonstrating an interesting...

  11. Modeling soil CO2 production and transport with dynamic source and diffusion terms: testing the steady-state assumption using DETECT v1.0

    Science.gov (United States)

    Ryan, Edmund M.; Ogle, Kiona; Kropp, Heather; Samuels-Crow, Kimberly E.; Carrillo, Yolima; Pendall, Elise

    2018-05-01

    The flux of CO2 from the soil to the atmosphere (soil respiration, Rsoil) is a major component of the global carbon (C) cycle. Methods to measure and model Rsoil, or partition it into different components, often rely on the assumption that soil CO2 concentrations and fluxes are in steady state, implying that Rsoil is equal to the rate at which CO2 is produced by soil microbial and root respiration. Recent research, however, questions the validity of this assumption. Thus, the aim of this work was two-fold: (1) to describe a non-steady state (NSS) soil CO2 transport and production model, DETECT, and (2) to use this model to evaluate the environmental conditions under which Rsoil and CO2 production are likely in NSS. The backbone of DETECT is a non-homogeneous, partial differential equation (PDE) that describes production and transport of soil CO2, which we solve numerically at fine spatial and temporal resolution (e.g., 0.01 m increments down to 1 m, every 6 h). Production of soil CO2 is simulated for every depth and time increment as the sum of root respiration and microbial decomposition of soil organic matter. Both of these factors can be driven by current and antecedent soil water content and temperature, which can also vary by time and depth. We also analytically solved the ordinary differential equation (ODE) corresponding to the steady-state (SS) solution to the PDE model. We applied the DETECT NSS and SS models to the six-month growing season period representative of a native grassland in Wyoming. Simulation experiments were conducted with both model versions to evaluate factors that could affect departure from SS, such as (1) varying soil texture; (2) shifting the timing or frequency of precipitation; and (3) with and without the environmental antecedent drivers. For a coarse-textured soil, Rsoil from the SS model closely matched that of the NSS model. However, in a fine-textured (clay) soil, growing season Rsoil was ˜ 3 % higher under the assumption of
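
    A minimal sketch of the non-steady-state idea rather than the DETECT model itself: an explicit finite-difference solution of a 1-D diffusion equation with a depth-dependent CO2 production term, using an assumed constant diffusivity and an illustrative production profile.

      import numpy as np

      # dC/dt = D * d2C/dz2 + S(z): 1 m soil profile, 0.01 m grid spacing.
      nz, dz = 100, 0.01
      D = 1e-6                            # effective diffusivity (m2 s-1), assumed constant
      z = (np.arange(nz) + 0.5) * dz
      S = 1e-6 * np.exp(-z / 0.2)         # production decaying with depth (mol m-3 s-1)

      dt = 0.25 * dz**2 / D               # respects the explicit-scheme stability limit
      C = np.zeros(nz)                    # CO2 concentration above the atmospheric value
      C_atm = 0.0                         # fixed surface boundary condition

      for _ in range(20_000):             # roughly a week of simulated time
          lap = np.empty_like(C)
          lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
          lap[0] = (C[1] - 2 * C[0] + C_atm) / dz**2    # surface boundary
          lap[-1] = (C[-2] - C[-1]) / dz**2             # no-flux bottom boundary
          C += dt * (D * lap + S)

      # Surface efflux (Rsoil) from Fick's law across the top half-cell
      Rsoil = D * (C[0] - C_atm) / (dz / 2)
      print(f"surface CO2 efflux ~ {Rsoil:.3e} mol m-2 s-1")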

  12. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.

  13. Sensitivity of tsunami evacuation modeling to direction and land cover assumptions

    Science.gov (United States)

    Schmidtlein, Mathew C.; Wood, Nathan J.

    2015-01-01

    Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, there has been insufficient attention paid to understanding model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations and use the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of modeling to the direction of movement by comparing standard safety-to-hazard evacuation times to hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times but the strong relationship to the hazard-to-safety evacuation times, slightly conservative bias, and shorter processing times of the safety-to-hazard approach make it the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations with evacuation times lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.
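
    A toy illustration of the Monte Carlo treatment of speed conservation values (SCVs): the path segments, land-cover classes, and SCV ranges below are hypothetical stand-ins for the study's anisotropic least-cost-distance surfaces.

      import numpy as np

      rng = np.random.default_rng(3)

      segments = np.array([120.0, 300.0, 80.0, 150.0])        # segment lengths (m)
      covers = np.array(["road", "wetland", "beach", "road"])  # land cover per segment
      base_speed = 1.22                                        # walking speed (m s-1)

      # Plausible SCV ranges (fraction of base speed retained) by land cover
      scv_range = {"road": (0.9, 1.0), "wetland": (0.3, 0.6), "beach": (0.5, 0.8)}

      n_draws = 1000
      times = np.empty(n_draws)
      for i in range(n_draws):
          scv = {cover: rng.uniform(low, high) for cover, (low, high) in scv_range.items()}
          speeds = np.array([base_speed * scv[c] for c in covers])
          times[i] = np.sum(segments / speeds)                 # travel time in seconds

      print(f"median evacuation time {np.median(times):.0f} s, "
            f"5th-95th percentile {np.percentile(times, 5):.0f}-{np.percentile(times, 95):.0f} s")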

  14. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  15. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of and increased transparency in the reporting of statistical assumption checking. PMID:28533971

  16. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Directory of Open Access Journals (Sweden)

    Anja F. Ernst

    2017-05-01

    Full Text Available Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of and increased transparency in the reporting of statistical assumption checking.

  17. Consistency analysis of subspace identification methods based on a linear regression approach

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2001-01-01

    In the literature results can be found which claim consistency for the subspace method under certain quite weak assumptions. Unfortunately, a new result gives a counterexample showing inconsistency under these assumptions and then gives new, more strict sufficient assumptions which, however, do not include important model structures such as Box-Jenkins. Based on a simple least squares approach this paper shows the possible inconsistency under the weak assumptions and develops only slightly stricter assumptions sufficient for consistency and which include any model structure...

  18. A comprehensive approach to age-dependent dosimetric modeling

    International Nuclear Information System (INIS)

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1986-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper we discuss a comprehensive approach to age-dependent dosimetric modeling in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates or risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks

  19. A comprehensive approach to age-dependent dosimetric modeling

    International Nuclear Information System (INIS)

    Leggett, R.W.; Cristy, M.; Eckerman, K.F.

    1987-01-01

    In the absence of age-specific biokinetic models, current retention models of the International Commission on Radiological Protection (ICRP) frequently are used as a point of departure for evaluation of exposures to the general population. These models were designed and intended for estimation of long-term integrated doses to the adult worker. Their format and empirical basis preclude incorporation of much valuable physiological information and physiologically reasonable assumptions that could be used in characterizing the age-specific behavior of radioelements in humans. In this paper a comprehensive approach to age-dependent dosimetric modeling is discussed in which consideration is given not only to changes with age in masses and relative geometries of body organs and tissues but also to best available physiological and radiobiological information relating to the age-specific biobehavior of radionuclides. This approach is useful in obtaining more accurate estimates of long-term dose commitments as a function of age at intake, but it may be particularly valuable in establishing more accurate estimates of dose rate as a function of age. Age-specific dose rates are needed for a proper analysis of the potential effects on estimates of risk of elevated dose rates per unit intake in certain stages of life, elevated response per unit dose received during some stages of life, and age-specific non-radiogenic competing risks. 16 refs.; 3 figs.; 1 table

  20. New Assumptions to Guide SETI Research

    Science.gov (United States)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  1. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Science.gov (United States)

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  2. Combining engineering and data-driven approaches: Development of a generic fire risk model facilitating calibration

    DEFF Research Database (Denmark)

    De Sanctis, G.; Fischer, K.; Kohler, J.

    2014-01-01

    Fire risk models support decision making for engineering problems under the consistent consideration of the associated uncertainties. Empirical approaches can be used for cost-benefit studies when enough data about the decision problem are available. But often the empirical approaches are not detailed enough. Engineering risk models, on the other hand, may be detailed but typically involve assumptions that may result in a biased risk assessment and make a cost-benefit study problematic. In two related papers it is shown how engineering and data-driven modeling can be combined by developing a generic risk model that is calibrated to observed fire loss data. Generic risk models assess the risk of buildings based on specific risk indicators and support risk assessment at a portfolio level. After an introduction to the principles of generic risk assessment, the focus of the present paper...

  3. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the interested problem and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required

  4. Resolving the double tension: Toward a new approach to measurement modeling in cross-national research

    Science.gov (United States)

    Medina, Tait Runnfeldt

    The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and

  5. Data and methods to characterize the role of sex work and to inform sex work programs in generalized HIV epidemics: evidence to challenge assumptions.

    Science.gov (United States)

    Mishra, Sharmistha; Boily, Marie-Claude; Schwartz, Sheree; Beyrer, Chris; Blanchard, James F; Moses, Stephen; Castor, Delivette; Phaswana-Mafuya, Nancy; Vickerman, Peter; Drame, Fatou; Alary, Michel; Baral, Stefan D

    2016-08-01

    In the context of generalized human immunodeficiency virus (HIV) epidemics, there has been limited recent investment in HIV surveillance and prevention programming for key populations including female sex workers. Often implicit in the decision to limit investment in these epidemic settings are assumptions including that commercial sex is not significant to the sustained transmission of HIV, and HIV interventions designed to reach "all segments of society" will reach female sex workers and clients. Emerging empiric and model-based evidence is challenging these assumptions. This article highlights the frameworks and estimates used to characterize the role of sex work in HIV epidemics as well as the relevant empiric data landscape on sex work in generalized HIV epidemics and their strengths and limitations. Traditional approaches to estimate the contribution of sex work to HIV epidemics do not capture the potential for upstream and downstream sexual and vertical HIV transmission. Emerging approaches such as the transmission population attributable fraction from dynamic mathematical models can address this gap. To move forward, the HIV scientific community must begin by replacing assumptions about the epidemiology of generalized HIV epidemics with data and more appropriate methods of estimating the contribution of unprotected sex in the context of sex work. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    ) model~\\cite{borges99data}. These techniques typically rely on the \\textit{Markov assumption with history depth} $n$, i.e., it is assumed that the next requested page is only dependent on the last $n$ pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...

  7. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
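
    A minimal sketch of the basic Poisson N-mixture likelihood evaluated in the study (the proposed QCV diagnostic is not reproduced here): simulated repeated counts are fitted by marginalizing the latent site abundances up to a truncation bound.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson, binom

      rng = np.random.default_rng(4)
      R, T, lam_true, p_true = 150, 4, 5.0, 0.4
      N = rng.poisson(lam_true, R)                        # latent abundance per site
      y = rng.binomial(N[:, None], p_true, size=(R, T))   # repeated counts

      def nll(params, y, n_max=60):
          """Negative log-likelihood with the latent N summed out."""
          lam = np.exp(params[0])
          p = 1.0 / (1.0 + np.exp(-params[1]))
          Ns = np.arange(n_max + 1)
          prior = poisson.pmf(Ns, lam)
          ll = 0.0
          for counts in y:
              lik_given_N = np.prod(binom.pmf(counts[:, None], Ns, p), axis=0)
              ll += np.log(np.sum(prior * lik_given_N) + 1e-300)
          return -ll

      fit = minimize(nll, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
      lam_hat = np.exp(fit.x[0])
      p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
      print(f"lambda_hat={lam_hat:.2f} (true {lam_true}), p_hat={p_hat:.2f} (true {p_true})")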

  8. Bioaccumulation factors and the steady state assumption for cesium isotopes in aquatic foodwebs near nuclear facilities.

    Science.gov (United States)

    Rowan, D J

    2013-07-01

    Steady state approaches, such as transfer coefficients or bioaccumulation factors, are commonly used to model the bioaccumulation of (137)Cs in aquatic foodwebs from routine operations and releases from nuclear generating stations and other nuclear facilities. Routine releases from nuclear generating stations and facilities, however, often consist of pulses as liquid waste is stored, analyzed to ensure regulatory compliance and then released. The effect of repeated pulse releases on the steady state assumption inherent in the bioaccumulation factor approach has not been evaluated. In this study, I examine the steady state assumption for aquatic biota by analyzing data for two cesium isotopes in the same biota, one isotope in steady state (stable (133)Cs) from geologic sources and the other released in pulses ((137)Cs) from reactor operations. I also compare (137)Cs bioaccumulation factors for similar upstream populations from the same system exposed solely to weapon test (137)Cs, and assumed to be in steady state. The steady state assumption appears to be valid for small organisms at lower trophic levels (zooplankton, rainbow smelt and 0+ yellow perch) but not for older and larger fish at higher trophic levels (walleye). Attempts to account for previous exposure and retention through a biokinetics approach had a similar effect on steady state, upstream and non-steady state, downstream populations of walleye, but were ineffective in explaining the more or less constant deviation between fish with steady state exposures and non-steady state exposures of about 2-fold for all age classes of walleye. These results suggest that for large, piscivorous fish, repeated exposure to short duration, pulse releases leads to much higher (137)Cs BAFs than expected from (133)Cs BAFs for the same fish or (137)Cs BAFs for similar populations in the same system not impacted by reactor releases. These results suggest that the steady state approach should be used with caution in any
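
    A minimal sketch of a one-compartment biokinetic model with illustrative (not measured) rate constants: with linear kinetics, a pulsed exposure having the same time-averaged water concentration produces an apparent BAF close to the constant-exposure value, so such a model alone would not reproduce the roughly two-fold deviation noted above.

      import numpy as np

      # dC_fish/dt = k_u * C_water(t) - k_e * C_fish  (illustrative constants)
      k_u, k_e = 100.0, 0.01          # uptake (L kg-1 d-1), elimination (d-1)
      dt, days = 0.1, 2000.0
      t = np.arange(0.0, days, dt)

      def simulate(c_water):
          c = 0.0
          for cw in c_water:
              c += dt * (k_u * cw - k_e * c)
          return c

      cw_mean = 1.0
      constant = np.full_like(t, cw_mean)
      pulsed = np.where((t % 30.0) < 1.0, cw_mean * 30.0, 0.0)   # monthly one-day pulses, same mean

      print(f"steady-state BAF            : {k_u / k_e:.0f} L/kg")
      print(f"apparent BAF, constant input: {simulate(constant) / cw_mean:.0f} L/kg")
      print(f"apparent BAF, pulsed input  : {simulate(pulsed) / cw_mean:.0f} L/kg")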

  9. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  10. Leakage-Resilient Circuits without Computational Assumptions

    DEFF Research Database (Denmark)

    Dziembowski, Stefan; Faust, Sebastian

    2012-01-01

    Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied...... on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works...

  11. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Hallberg

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other...... application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical...

  12. Usefulness of an equal-probability assumption for out-of-equilibrium states: A master equation approach

    KAUST Repository

    Nogawa, Tomoaki

    2012-10-18

    We examine the effectiveness of assuming an equal probability for states far from equilibrium. For this aim, we propose a method to construct a master equation for extensive variables describing nonstationary nonequilibrium dynamics. The key point of the method is the assumption that transient states are equivalent to the equilibrium state that has the same extensive variables, i.e., an equal probability holds for microscopic states in nonequilibrium. We demonstrate an application of this method to the critical relaxation of the two-dimensional Potts model by Monte Carlo simulations. While the one-variable description, which is adequate for equilibrium, yields relaxation dynamics that are very fast, the redundant two-variable description well reproduces the true dynamics quantitatively. These results suggest that some class of the nonequilibrium state can be described with a small extension of degrees of freedom, which may lead to an alternative way to understand nonequilibrium phenomena. © 2012 American Physical Society.

  13. Usefulness of an equal-probability assumption for out-of-equilibrium states: A master equation approach

    KAUST Repository

    Nogawa, Tomoaki; Ito, Nobuyasu; Watanabe, Hiroshi

    2012-01-01

    We examine the effectiveness of assuming an equal probability for states far from equilibrium. For this aim, we propose a method to construct a master equation for extensive variables describing nonstationary nonequilibrium dynamics. The key point of the method is the assumption that transient states are equivalent to the equilibrium state that has the same extensive variables, i.e., an equal probability holds for microscopic states in nonequilibrium. We demonstrate an application of this method to the critical relaxation of the two-dimensional Potts model by Monte Carlo simulations. While the one-variable description, which is adequate for equilibrium, yields relaxation dynamics that are very fast, the redundant two-variable description well reproduces the true dynamics quantitatively. These results suggest that some class of the nonequilibrium state can be described with a small extension of degrees of freedom, which may lead to an alternative way to understand nonequilibrium phenomena. © 2012 American Physical Society.

  14. Practical modeling approaches for geological storage of carbon dioxide.

    Science.gov (United States)

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  15. Approaches to surface complexation modeling of Uranium(VI) adsorption on aquifer sediments

    Science.gov (United States)

    Davis, J.A.; Meece, D.E.; Kohler, M.; Curtis, G.P.

    2004-01-01

    Uranium(VI) adsorption onto aquifer sediments was studied in batch experiments as a function of pH and U(VI) and dissolved carbonate concentrations in artificial groundwater solutions. The sediments were collected from an alluvial aquifer at a location upgradient of contamination from a former uranium mill operation at Naturita, Colorado (USA). The ranges of aqueous chemical conditions used in the U(VI) adsorption experiments (pH 6.9 to 7.9; U(VI) concentration 2.5 × 10-8 to 1 × 10-5 M; partial pressure of carbon dioxide gas 0.05 to 6.8%) were based on the spatial variation in chemical conditions observed in 1999-2000 in the Naturita alluvial aquifer. The major minerals in the sediments were quartz, feldspars, and calcite, with minor amounts of magnetite and clay minerals. Quartz grains commonly exhibited coatings that were greater than 10 nm in thickness and composed of an illite-smectite clay with occluded ferrihydrite and goethite nanoparticles. Chemical extractions of quartz grains removed from the sediments were used to estimate the masses of iron and aluminum present in the coatings. Various surface complexation modeling approaches were compared in terms of the ability to describe the U(VI) experimental data and the data requirements for model application to the sediments. Published models for U(VI) adsorption on reference minerals were applied to predict U(VI) adsorption based on assumptions about the sediment surface composition and physical properties (e.g., surface area and electrical double layer). Predictions from these models were highly variable, with results overpredicting or underpredicting the experimental data, depending on the assumptions used to apply the model. Although the models for reference minerals are supported by detailed experimental studies (and in ideal cases, surface spectroscopy), the results suggest that errors are caused in applying the models directly to the sediments by uncertain knowledge of: 1) the proportion and types of

  16. The feminist/emotionally focused therapy practice model: an integrated approach for couple therapy.

    Science.gov (United States)

    Vatcher, C A; Bogo, M

    2001-01-01

    Emotionally focused therapy (EFT) is a well-developed, empirically tested practice model for couple therapy that integrates systems, experiential, and attachment theories. Feminist family therapy theory has provided a critique of biased assumptions about gender at play in traditional family therapy practice and the historical absence of discussions of power in family therapy theory. This article presents an integrated feminist/EFT practice model for use in couple therapy, using a case from practice to illustrate key concepts. Broadly, the integrated model addresses gender roles and individual emotional experience using a systemic framework for understanding couple interaction. The model provides practitioners with a sophisticated, comprehensive, and relevant practice approach for working with the issues and challenges emerging for contemporary heterosexual couples.

  17. The CAPM approach to materiality

    OpenAIRE

    Hadjieftychiou, Aristarchos

    1993-01-01

    Materiality is a pervasive accounting concept that has defied a precise quantitative definition. The Capital Asset Pricing Model (CAPM) approach to materiality provides a means for determining the limits that bound materiality. Also, the approach makes it possible to locate the point estimate within these limits based on certain assumptions.
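
    A minimal sketch of the CAPM expected-return relation on which the approach rests; the input figures are purely illustrative.

      # E[R] = Rf + beta * (E[Rm] - Rf)
      def capm_expected_return(risk_free, beta, market_return):
          return risk_free + beta * (market_return - risk_free)

      print(capm_expected_return(risk_free=0.03, beta=1.2, market_return=0.08))   # -> 0.09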

  18. Modeling and Analyzing Real-Time Multiprocessor Systems

    NARCIS (Netherlands)

    Wiggers, M.H.; Thiele, Lothar; Lee, Edward A.; Schlieker, Simon; Bekooij, Marco Jan Gerrit

    2010-01-01

    Researchers have proposed approaches to verify that real-time multiprocessor systems meet their timeliness constraints. These approaches make assumptions on the model of computation, the load placed on the multiprocessor system, and the faults that can arise. This heterogeneous set of assumptions

  19. Measuring Productivity Change without Neoclassical Assumptions: A Conceptual Analysis

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2008-01-01

    textabstractThe measurement of productivity change (or difference) is usually based on models that make use of strong assumptions such as competitive behaviour and constant returns to scale. This survey discusses the basics of productivity measurement and shows that one can dispense with most if not

  20. Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.

    Science.gov (United States)

    Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C

    2015-01-01

    Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yields notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator and no unmeasured confounders that also serve as a moderator of the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.

  1. Academic Achievement and Behavioral Health among Asian American and African American Adolescents: Testing the Model Minority and Inferior Minority Assumptions

    Science.gov (United States)

    Whaley, Arthur L.; Noel, La Tonya

    2013-01-01

    The present study tested the model minority and inferior minority assumptions by examining the relationship between academic performance and measures of behavioral health in a subsample of 3,008 (22%) participants in a nationally representative, multicultural sample of 13,601 students in the 2001 Youth Risk Behavioral Survey, comparing Asian…

  2. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provided cloud geometry as well as cloud micro and macro properties. Large-scale forcing data to drive the CRM are from TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one that was recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed by a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas remarkably overestimated under the random overlap (RO) assumption in comparison with that using CRM inherent cloud geometry. These biases can reach as high as 100 W m-2 for SWCF and 60 W m-2 for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance on both SWCF and LWCF simulations, with the biases almost reduced by 3-fold compared with traditional overlap assumptions. The superiority of GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under traditional overlap assumptions. The study also pointed out the deficiency of constant assumption on Lcf in GenO assumption. Further examinations indicate that the CRM diagnostic Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account variation of Lcf in the vertical well reproduces such a relationship and
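
    A minimal sketch of the three overlap rules for two partially cloudy layers; the exponential weighting by the decorrelation length Lcf follows the general overlap formulation of Hogan and Illingworth (2000), and the inputs are assumed for illustration.

      import numpy as np

      def combined_cloud_fraction(c1, c2, dz, l_cf):
          """Projected cloud cover of two layers under maximum, random and general overlap."""
          c_max = max(c1, c2)                 # maximum overlap
          c_ran = c1 + c2 - c1 * c2           # random overlap
          alpha = np.exp(-dz / l_cf)          # weighting from the decorrelation length
          c_gen = alpha * c_max + (1.0 - alpha) * c_ran
          return c_max, c_ran, c_gen

      # Two 50% cloudy layers 1 km apart, assumed decorrelation length of 2 km
      print(combined_cloud_fraction(0.5, 0.5, dz=1000.0, l_cf=2000.0))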

  3. Forecasting Value-at-Risk under Different Distributional Assumptions

    Directory of Open Access Journals (Sweden)

    Manuela Braione

    2016-01-01

    Full Text Available Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student, Multivariate Exponential Power and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy-tails and skewness in the distributional assumption with the skew-Student outperforming the others across all tests and confidence levels.
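
    A minimal sketch of one-day parametric VaR under two of the distributional assumptions compared above (normal and symmetric Student t, fitted to simulated fat-tailed returns); the skewed variants and the GARCH dynamics are omitted.

      import numpy as np
      from scipy import stats

      returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=42)   # simulated returns

      alpha = 0.01                                    # 1% VaR level
      mu, sigma = returns.mean(), returns.std(ddof=1)

      var_normal = -(mu + sigma * stats.norm.ppf(alpha))          # normal assumption

      nu, loc, scale = stats.t.fit(returns)                       # Student-t assumption (ML fit)
      var_student = -(loc + scale * stats.t.ppf(alpha, nu))

      print(f"1% VaR, normal assumption   : {var_normal:.4f}")
      print(f"1% VaR, Student-t assumption: {var_student:.4f}")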

  4. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.

  5. Graphene growth process modeling: a physical-statistical approach

    Science.gov (United States)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since fabrication of high-quality graphene sheets requires the understanding of growth mechanisms, and methods of characterization and control of grain size of graphene flakes, analytical modeling of graphene growth process is therefore essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of graphene growth process on Cu foil.
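
    A minimal sketch of fitting a confined exponential area-growth curve to hypothetical island-area measurements; the functional form and the data points are assumptions for illustration, not the paper's calibrated model.

      import numpy as np
      from scipy.optimize import curve_fit

      def confined_exponential(t, a_max, k):
          """Island area that saturates as coverage approaches a_max."""
          return a_max * (1.0 - np.exp(-k * t))

      # Hypothetical mean island area (um^2) versus growth time (min)
      t_obs = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 20.0])
      a_obs = np.array([2.1, 4.0, 7.2, 9.4, 10.9, 12.0, 13.4, 13.9])

      params, _ = curve_fit(confined_exponential, t_obs, a_obs, p0=[15.0, 0.1])
      a_max_hat, k_hat = params
      print(f"A_max ~ {a_max_hat:.1f} um^2, k ~ {k_hat:.2f} min^-1")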

  6. The basic approach to age-structured population dynamics models, methods and numerics

    CERN Document Server

    Iannelli, Mimmo

    2017-01-01

    This book provides an introduction to age-structured population modeling which emphasises the connection between mathematical theory and underlying biological assumptions. Through the rigorous development of the linear theory and the nonlinear theory alongside numerics, the authors explore classical equations that describe the dynamics of certain ecological systems. Modeling aspects are discussed to show how relevant problems in the fields of demography, ecology, and epidemiology can be formulated and treated within the theory. In particular, the book presents extensions of age-structured modelling to the spread of diseases and epidemics while also addressing the issue of regularity of solutions, the asymptotic behaviour of solutions, and numerical approximation. With sections on transmission models, non-autonomous models and global dynamics, this book fills a gap in the literature on theoretical population dynamics. The Basic Approach to Age-Structured Population Dynamics will appeal to graduate students an...

  7. Positive Mathematical Programming Approaches – Recent Developments in Literature and Applied Modelling

    Directory of Open Access Journals (Sweden)

    Thomas Heckelei

    2012-05-01

    Full Text Available This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview on a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.

  8. Diagnosing Diagnostic Models: From Von Neumann's Elephant to Model Equivalencies and Network Psychometrics

    Science.gov (United States)

    von Davier, Matthias

    2018-01-01

    This article critically reviews how diagnostic models have been conceptualized and how they compare to other approaches used in educational measurement. In particular, certain assumptions that have been taken for granted and used as defining characteristics of diagnostic models are reviewed and it is questioned whether these assumptions are the…

  9. PFP issues/assumptions development and management planning guide

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, both open and current issues/assumptions (Appendix A), and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, it will periodically require revisions resulting from improvements to the information, processes, and techniques described here. Revisions that suggest improved processes will require only PFP management approval.

  10. A multi-model approach to X-ray pulsars

    Directory of Open Access Journals (Sweden)

    Schönherr G.

    2014-01-01

    Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad band continuum, the formation of cyclotron resonance scattering features (CRSFs), or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems’ overall geometry are unconstrained, and different models are often based on different fundamental assumptions, while everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry), and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry for the light bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and radiation transfer calculations of CRSFs, as well as a General Relativity (GR) light bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a

  11. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    Science.gov (United States)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, and harvested N output, as well as on the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modeling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered a giant lysimeter system, and the N concentration at the drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows that 62% of the export occurs during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the NO3 not taken up by crops, or from mineralization of organic matter, during the preceding summer and autumn. Under these assumptions, we used PWNP as simplified input data for the modelling of N transport; NO3 losses are then mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to model water and N losses. The hydrological simulation was calibrated against observation data at different sub-catchments. We performed a hydrograph separation validated on thermal and isotopic tracer studies and on general knowledge of the behavior of the Orgeval catchment. Our results show good agreement between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of the calibrated PWNP values with the results from a field survey (annual PWNP campaign) showed a significant positive correlation. One can conclude that
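
    The Nash-Sutcliffe coefficient quoted above is a standard skill score; for reference, a minimal stand-alone implementation (independent of the HYPE model, with made-up example series) looks like this:

```python
# Minimal Nash-Sutcliffe efficiency (NSE), the skill score quoted in the
# record (0.75 for discharge, 0.7 for N flux). NSE = 1 means a perfect fit;
# NSE <= 0 means the model is no better than the mean of the observations.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical daily discharge series (m3/s) for illustration only.
q_obs = [1.2, 1.5, 2.1, 3.8, 2.9, 2.0, 1.6]
q_sim = [1.1, 1.6, 2.4, 3.3, 3.0, 1.9, 1.5]
print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.2f}")
```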

  12. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  13. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, the assumption made by probabilistic models, holds that the learning input is drawn from a distribution of grammatical samples from the underlying language and that the learner aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling makes no assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
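
    The practical difference between the two sampling assumptions can be made concrete with a toy Bayesian learner choosing between a restrictive and a permissive grammar. The sketch below uses assumed toy numbers and is only schematic; it is not the experimental model used in the paper.

```python
# Toy illustration (assumed numbers, not the paper's model) of how strong vs
# weak sampling treat the *absence* of a construction as evidence.
# Restrictive grammar licenses constructions {A, B}; permissive licenses {A, B, X}.

def posterior_restrictive(n_obs, sampling="strong", prior=0.5):
    """Posterior that the restrictive grammar is correct after n_obs
    sentences, all of which use only constructions A or B (never X)."""
    if sampling == "strong":
        # Sentences are drawn from the grammar itself (size principle):
        # each observation has prob 1/2 under restrictive, 1/3 under permissive.
        like_r, like_p = (1 / 2) ** n_obs, (1 / 3) ** n_obs
    else:
        # Weak sampling: the input is generated independently of the grammar,
        # so an A/B sentence is equally likely under both hypotheses.
        like_r = like_p = 1.0
    return prior * like_r / (prior * like_r + (1 - prior) * like_p)

for n in (0, 5, 20):
    print(n, round(posterior_restrictive(n, "strong"), 3),
          round(posterior_restrictive(n, "weak"), 3))
# Under strong sampling the restrictive grammar wins as n grows;
# under weak sampling the posterior never moves from the prior.
```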

  14. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, the assumption made by probabilistic models, holds that the learning input is drawn from a distribution of grammatical samples from the underlying language and that the learner aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling makes no assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  15. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  16. Modeling of annular two-phase flow using a unified CFD approach

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haipeng, E-mail: haipengl@kth.se; Anglart, Henryk, E-mail: henryk@kth.se

    2016-07-15

    Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • Liquid film was modeled based on a two-dimensional thin film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with evaporating liquid film has been developed using computational fluid dynamics (CFD). The model employs a separate solver with two-dimensional conservation equations to predict propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approaches are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including the liquid film evaporation as well as the droplet deposition and entrainment. The resultant unified framework for annular flow has been applied to the steam-water flow with conditions typical for a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.

  17. Modeling of annular two-phase flow using a unified CFD approach

    International Nuclear Information System (INIS)

    Li, Haipeng; Anglart, Henryk

    2016-01-01

    Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • Liquid film was modeled based on a two-dimensional thin film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with evaporating liquid film has been developed using computational fluid dynamics (CFD). The model employs a separate solver with two-dimensional conservation equations to predict propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approaches are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including the liquid film evaporation as well as the droplet deposition and entrainment. The resultant unified framework for annular flow has been applied to the steam-water flow with conditions typical for a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.

  18. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Science.gov (United States)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
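
    One simple way to phrase such a multi-objective calibration is as a weighted combination of goodness-of-fit scores for discharge and for SWE. The sketch below is a generic illustration; the weights, variable names, and aggregation rule are assumptions, not the exact procedure used in the study.

```python
# Generic multi-objective calibration score combining discharge and snow
# water equivalent (SWE) fits. Weights and the aggregation rule are
# illustrative assumptions, not the exact procedure used in the study.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective_score(q_obs, q_sim, swe_obs, swe_sim, w_q=0.6, w_swe=0.4):
    """Higher is better; a calibration algorithm would maximise this."""
    return w_q * nse(q_obs, q_sim) + w_swe * nse(swe_obs, swe_sim)

# Hypothetical series for illustration.
q_obs, q_sim = [3.0, 5.2, 8.1, 6.0], [3.2, 5.0, 7.5, 6.3]
swe_obs, swe_sim = [120, 180, 90, 10], [110, 190, 95, 20]
print(f"combined score = {multi_objective_score(q_obs, q_sim, swe_obs, swe_sim):.2f}")
```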

  19. Transportation radiological risk assessment for the programmatic environmental impact statement: An overview of methodologies, assumptions, and input parameters

    International Nuclear Information System (INIS)

    Monette, F.; Biwer, B.; LePoire, D.; Chen, S.Y.

    1994-01-01

    The U.S. Department of Energy is considering a broad range of alternatives for the future configuration of radioactive waste management at its network of facilities. Because the transportation of radioactive waste is an integral component of the management alternatives being considered, the estimated human health risks associated with both routine and accident transportation conditions must be assessed to allow a complete appraisal of the alternatives. This paper provides an overview of the technical approach being used to assess the radiological risks from the transportation of radioactive wastes. The approach presented employs the RADTRAN 4 computer code to estimate the collective population risk during routine and accident transportation conditions. Supplemental analyses are conducted using the RISKIND computer code to address areas of specific concern to individuals or population subgroups. RISKIND is used for estimating routine doses to maximally exposed individuals and for assessing the consequences of the most severe credible transportation accidents. The transportation risk assessment is designed to ensure -- through uniform and judicious selection of models, data, and assumptions -- that relative comparisons of risk among the various alternatives are meaningful. This is accomplished by uniformly applying common input parameters and assumptions to each waste type for all alternatives. The approach presented can be applied to all radioactive waste types and provides a consistent and comprehensive evaluation of transportation-related risk

  20. A Bayesian Nonparametric Approach to Factor Analysis

    DEFF Research Database (Denmark)

    Piatek, Rémi; Papaspiliopoulos, Omiros

    2018-01-01

    This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...

  1. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  2. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are the cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
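
    To make the classical side of the comparison concrete, the Kaplan-Meier estimator named above can be computed in a few lines. This is a generic textbook implementation with made-up follow-up times, not the study's analysis of the cervical screening cohort.

```python
# Minimal Kaplan-Meier estimator (the classical method named in the record).
# Times and event indicators below are made up for illustration only.
import numpy as np

def kaplan_meier(times, events):
    """Return (event time, survival probability) pairs.
    events[i] = 1 if the event occurred, 0 if the observation was censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    survival, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)              # subjects still under observation
        d = np.sum((times == t) & (events == 1))  # events at time t
        s *= 1.0 - d / at_risk
        survival.append((t, s))
    return survival

follow_up = [5, 8, 12, 12, 15, 20, 22, 30]   # months
event     = [1, 0, 1,  1,  0,  1,  0,  1]    # 1 = event, 0 = censored
for t, s in kaplan_meier(follow_up, event):
    print(f"S({t:.0f}) = {s:.3f}")
```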

  3. Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita

    Science.gov (United States)

    Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.

    2009-12-01

    Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions

  4. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling of continuous explanatory variables (covariates) in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control of false positives and the sensitivity
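
    As a generic illustration of the LME idea (not the authors' FMRI group-analysis tool), a random-intercept model with one covariate can be fitted with statsmodels; the data frame and column names below are hypothetical and the data are simulated.

```python
# Illustrative linear mixed-effects fit (random intercept per subject) using
# statsmodels. This is a generic LME sketch, not the authors' FMRI group
# analysis tool; the data frame and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 20, 8
subj = np.repeat(np.arange(n_subj), n_obs)
covariate = rng.normal(size=n_subj * n_obs)        # e.g. age or task load
subj_effect = rng.normal(scale=0.8, size=n_subj)   # between-subject variability
response = (1.0 + 0.5 * covariate + subj_effect[subj]
            + rng.normal(scale=0.5, size=n_subj * n_obs))

df = pd.DataFrame({"response": response, "covariate": covariate, "subject": subj})

# Fixed effect for the covariate, random intercept grouped by subject.
model = smf.mixedlm("response ~ covariate", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```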

  5. Conflicts versus analytical redundancy relations: a comparative analysis of the model based diagnosis approach from the artificial intelligence and automatic control perspectives.

    Science.gov (United States)

    Cordier, Marie-Odile; Dague, Philippe; Lévy, François; Montmain, Jacky; Staroswiecki, Marcel; Travé-Massuyès, Louise

    2004-10-01

    Two distinct and parallel research communities have been working along the lines of the model-based diagnosis approach: the fault detection and isolation (FDI) community and the diagnostic (DX) community that have evolved in the fields of automatic control and artificial intelligence, respectively. This paper clarifies and links the concepts and assumptions that underlie the FDI analytical redundancy approach and the DX consistency-based logical approach. A formal framework is proposed in order to compare the two approaches and the theoretical proof of their equivalence together with the necessary and sufficient conditions is provided.

  6. Oil production, oil prices, and macroeconomic adjustment under different wage assumptions

    International Nuclear Information System (INIS)

    Harvie, C.; Maleka, P.T.

    1992-01-01

    In a previous paper one of the authors developed a simple model to try to identify the possible macroeconomic adjustment processes arising in an economy experiencing a temporary period of oil production, under alternative wage adjustment assumptions, namely nominal and real wage rigidity. Certain assumptions were made regarding the characteristics of actual production, the permanent revenues generated from that oil production, and the net exports/imports of oil. The role of the price of oil, and possible changes in that price was essentially ignored. Here we attempt to incorporate the price of oil, as well as changes in that price, in conjunction with the production of oil, the objective being to identify the contribution which the price of oil, and changes in it, make to the adjustment process itself. The emphasis in this paper is not given to a mathematical derivation and analysis of the model's dynamics of adjustment or its comparative statics, but rather to the derivation of simulation results from the model, for a specific assumed case, using a numerical algorithm program, conducive to the type of theoretical framework utilized here. The results presented suggest that although the adjustment profiles of the macroeconomic variables of interest, for either wage adjustment assumption, remain fundamentally the same, the magnitude of these adjustments is increased. Hence to derive a more accurate picture of the dimensions of adjustment of these macroeconomic variables, it is essential to include the price of oil as well as changes in that price. (Author)

  7. Large-scale analyses of synonymous substitution rates can be sensitive to assumptions about the process of mutation.

    Science.gov (United States)

    Aris-Brosou, Stéphane; Bielawski, Joseph P

    2006-08-15

    A popular approach to examine the roles of mutation and selection in the evolution of genomes has been to consider the relationship between codon bias and synonymous rates of molecular evolution. A significant relationship between these two quantities is taken to indicate the action of weak selection on substitutions among synonymous codons. The neutral theory predicts that the rate of evolution is inversely related to the level of functional constraint. Therefore, selection against the use of non-preferred codons among those coding for the same amino acid should result in lower rates of synonymous substitution as compared with sites not subject to such selection pressures. However, reliably measuring the extent of such a relationship is problematic, as estimates of synonymous rates are sensitive to our assumptions about the process of molecular evolution. Previous studies showed the importance of accounting for unequal codon frequencies, in particular when synonymous codon usage is highly biased. Yet, unequal codon frequencies can be modeled in different ways, making different assumptions about the mutation process. Here we conduct a simulation study to evaluate two different ways of modeling uneven codon frequencies and show that both model parameterizations can have a dramatic impact on rate estimates and affect biological conclusions about genome evolution. We reanalyze three large data sets to demonstrate the relevance of our results to empirical data analysis.

  8. Controversies in psychotherapy research: epistemic differences in assumptions about human psychology.

    Science.gov (United States)

    Shean, Glenn D

    2013-01-01

    It is the thesis of this paper that differences in philosophical assumptions about the subject matter and treatment methods of psychotherapy have contributed to disagreements about the external validity of empirically supported therapies (ESTs). These differences are evident in the theories that are the basis for both the design and interpretation of recent psychotherapy efficacy studies. The natural science model, as applied to psychotherapy outcome research, transforms the constitutive features of the study subject in a reciprocal manner so that problems, treatments, and indicators of effectiveness are limited to what can be directly observed. Meaning-based approaches to therapy emphasize processes and changes that do not lend themselves to experimental study. Hermeneutic philosophy provides a supplemental model to establishing validity in those instances where outcome indicators do not lend themselves to direct observation and measurement and require "deep" interpretation. Hermeneutics allows for a broadening of psychological study that allows one to establish a form of validity that is applicable when constructs do not refer to things that literally "exist" in nature. From a hermeneutic perspective the changes that occur in meaning-based therapies must be understood and evaluated on the manner in which they are applied to new situations, the logical ordering and harmony of the parts with the theoretical whole, and the capability of convincing experts and patients that the interpretation can stand up against other ways of understanding. Adoption of this approach often is necessary to competently evaluate the effectiveness of meaning-based therapies.

  9. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007
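
    The record's metadata carry no abstract, but the proportional hazards assumption it targets is commonly checked in practice with scaled-Schoenfeld-type diagnostics. The sketch below uses the lifelines package (assumed to be available) and its bundled example data; it is a standard diagnostic, not the Neyman-type smooth test with Schwarz's selection rule developed in the cited paper.

```python
# Generic check of the proportional hazards assumption for a fitted Cox model,
# using the lifelines package (assumed available). This is a standard
# Schoenfeld-residual-based diagnostic, not the smooth test of the paper.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()                    # example recidivism data shipped with lifelines
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()               # small p-values flag covariates violating PH
```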

  10. A phasor approach analysis of multiphoton FLIM measurements of three-dimensional cell culture models

    Science.gov (United States)

    Lakner, P. H.; Möller, Y.; Olayioye, M. A.; Brucker, S. Y.; Schenke-Layland, K.; Monaghan, M. G.

    2016-03-01

    Fluorescence lifetime imaging microscopy (FLIM) is a useful approach to obtain information regarding the endogenous fluorophores present in biological samples. The concise evaluation of FLIM data requires the use of robust mathematical algorithms. In this study, we developed a user-friendly phasor approach for analyzing FLIM data and applied this method to three-dimensional (3D) Caco-2 models of polarized epithelial luminal cysts in a supporting extracellular matrix environment. These Caco-2 based models were treated with epidermal growth factor (EGF) to stimulate proliferation, in order to determine whether FLIM could detect such a change in cell behavior. Autofluorescence from nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) in luminal Caco-2 cysts was stimulated by 2-photon laser excitation. Using a phasor approach, the lifetimes of the fluorophores involved and their contributions were calculated with fewer initial assumptions when compared to multiexponential decay fitting. The phasor approach simplified FLIM data analysis, making it an interesting tool for non-experts in numerical data analysis. We observed that increased proliferation stimulated by EGF led to a significant shift in fluorescence lifetime and a significant alteration of the phasor data shape. Our data demonstrate that multiphoton FLIM analysis with the phasor approach is a suitable method for the non-invasive analysis of 3D in vitro cell culture models, qualifying this method for monitoring basic cellular features and the effects of external factors.
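
    The phasor transform itself amounts to two normalized Fourier coefficients of the measured decay. The sketch below shows that textbook calculation on a synthetic single-exponential decay (the repetition rate and lifetime are assumed values); it is not the authors' analysis pipeline.

```python
# Textbook phasor transform of a fluorescence decay: the phasor coordinates
# (g, s) are the cosine and sine Fourier coefficients of the normalized decay
# at the laser repetition (angular) frequency. Synthetic, noise-free data.
import numpy as np

rep_rate = 80e6                      # assumed 80 MHz repetition rate
omega = 2 * np.pi * rep_rate
tau_true = 2.5e-9                    # assumed 2.5 ns single-exponential lifetime

t = np.linspace(0, 1 / rep_rate, 256, endpoint=False)
decay = np.exp(-t / tau_true)        # ideal decay histogram (no noise, no IRF)

g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)

# For a single-exponential decay the phasor lies on the universal semicircle,
# and the lifetime can be read back from the phase: s / g = omega * tau.
tau_phase = s / (g * omega)
print(f"g = {g:.3f}, s = {s:.3f}, recovered tau = {tau_phase * 1e9:.2f} ns")
```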

  11. Behavioural assumptions in labour economics: Analysing social security reforms and labour market transitions

    OpenAIRE

    van Huizen, T.M.

    2012-01-01

    The aim of this dissertation is to test behavioural assumptions in labour economics models and thereby improve our understanding of labour market behaviour. The assumptions under scrutiny in this study are derived from an analysis of recent influential policy proposals: the introduction of savings schemes in the system of social security. A central question is how this reform will affect labour market incentives and behaviour. Part I (Chapter 2 and 3) evaluates savings schemes. Chapter 2 exam...

  12. Technical note: Evaluation of the simultaneous measurements of mesospheric OH, HO2, and O3 under a photochemical equilibrium assumption - a statistical approach

    Science.gov (United States)

    Kulikov, Mikhail Y.; Nechaev, Anton A.; Belikovich, Mikhail V.; Ermakova, Tatiana S.; Feigin, Alexander M.

    2018-05-01

    This Technical Note presents a statistical approach to evaluating simultaneous measurements of several atmospheric components under the assumption of photochemical equilibrium. We consider simultaneous measurements of OH, HO2, and O3 at the altitudes of the mesosphere as a specific example and their daytime photochemical equilibrium as an evaluating relationship. A simplified algebraic equation relating local concentrations of these components in the 50-100 km altitude range has been derived. The parameters of the equation are temperature, neutral density, local zenith angle, and the rates of eight reactions. We have performed a one-year simulation of the mesosphere and lower thermosphere using a 3-D chemical-transport model. The simulation shows that the discrepancy between the calculated evolution of the components and the equilibrium value given by the equation does not exceed 3-4 % in the full range of altitudes independent of season or latitude. We have developed a statistical Bayesian evaluation technique for simultaneous measurements of OH, HO2, and O3 based on the equilibrium equation taking into account the measurement error. The first results of the application of the technique to MLS/Aura data (Microwave Limb Sounder) are presented in this Technical Note. It has been found that the satellite data of the HO2 distribution regularly demonstrate lower altitudes of this component's mesospheric maximum. This has also been confirmed by model HO2 distributions and comparison with offline retrieval of HO2 from the daily zonal means MLS radiance.

  13. Fun with maths: exploring implications of mathematical models for malaria eradication.

    Science.gov (United States)

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A

    2014-12-11

    Mathematical analyses and modelling have an important role in informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and also the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.
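
    As one example of the kind of simple calculation the authors have in mind, the time for prevalence to fall below an elimination threshold under a controlled reproduction number R_c < 1 can be estimated from exponential decline. The numbers below are assumptions chosen only to illustrate how sensitive the answer is to the inputs.

```python
# Back-of-the-envelope estimate (illustrative assumptions only) of time to
# local elimination when control holds the reproduction number R_c below 1:
# at low prevalence, I(t) ~ I0 * exp(-(1 - R_c) * t / D), where D is the mean
# duration of an infection, so the time to reach a threshold I_elim is
# t = D * ln(I0 / I_elim) / (1 - R_c).
import math

def time_to_elimination(i0, i_elim, r_c, duration_days):
    if r_c >= 1:
        raise ValueError("R_c must be below 1 for prevalence to decline")
    return duration_days * math.log(i0 / i_elim) / (1.0 - r_c)

# Assumed inputs: 10% starting prevalence, 1-per-100,000 elimination threshold,
# 200-day mean infection duration. Note the sensitivity to R_c.
for r_c in (0.5, 0.8, 0.95):
    days = time_to_elimination(0.10, 1e-5, r_c, 200)
    print(f"R_c = {r_c}: ~{days / 365:.1f} years")
```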

  14. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    Science.gov (United States)

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
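
    For reference, the N-mixture likelihood referred to above marginalizes the latent site abundances over a Poisson prior. The sketch below evaluates that marginal likelihood by brute force and maximizes it (a maximum-likelihood fit on simulated data with a fixed truncation point, rather than the Bayesian fit studied in the paper).

```python
# N-mixture model marginal likelihood: counts y[i, j] ~ Binomial(N_i, p) with
# latent abundance N_i ~ Poisson(lambda), summed over N up to a truncation
# point K. Maximum-likelihood sketch on simulated data, for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(1)
n_sites, n_visits, lam_true, p_true = 50, 4, 3.0, 0.4
N = rng.poisson(lam_true, n_sites)
y = rng.binomial(N[:, None], p_true, size=(n_sites, n_visits))

def neg_log_lik(theta, y, K=60):
    lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
    n_grid = np.arange(K + 1)                  # candidate abundances
    prior = poisson.pmf(n_grid, lam)           # P(N = n)
    ll = 0.0
    for counts in y:                           # site by site
        # P(y_i | N = n) for every candidate n (zero where n < max count)
        cond = np.prod(binom.pmf(counts[:, None], n_grid[None, :], p), axis=0)
        ll += np.log(np.sum(prior * cond))
    return -ll

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0]), args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"lambda_hat = {lam_hat:.2f} (true 3.0), p_hat = {p_hat:.2f} (true 0.4)")
```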

  15. Underlying assumptions and core beliefs in anorexia nervosa and dieting.

    Science.gov (United States)

    Cooper, M; Turner, H

    2000-06-01

    To investigate assumptions and beliefs in anorexia nervosa and dieting. The Eating Disorder Belief Questionnaire (EDBQ), was administered to patients with anorexia nervosa, dieters and female controls. The patients scored more highly than the other two groups on assumptions about weight and shape, assumptions about eating and negative self-beliefs. The dieters scored more highly than the female controls on assumptions about weight and shape. The cognitive content of anorexia nervosa (both assumptions and negative self-beliefs) differs from that found in dieting. Assumptions about weight and shape may also distinguish dieters from female controls.

  16. Assumptions of the primordial spectrum and cosmological parameter estimation

    International Nuclear Information System (INIS)

    Shafieloo, Arman; Souradeep, Tarun

    2011-01-01

    The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large structures depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit-parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)

  17. The effects of behavioral and structural assumptions in artificial stock market

    Science.gov (United States)

    Liu, Xinghua; Gregor, Shirley; Yang, Jianmei

    2008-04-01

    Recent literature has developed the conjecture that important statistical features of stock price series, such as the fat tails phenomenon, may depend mainly on the market microstructure. This conjecture motivated us to investigate the roles of both the market microstructure and agent behavior with respect to high-frequency returns and daily returns. We developed two simple models to investigate this issue. The first one is a stochastic model with a clearing house microstructure and a population of zero-intelligence agents. The second one has more behavioral assumptions based on the Minority Game and also has a clearing house microstructure. With the first model we found that a characteristic of the clearing house microstructure, namely the clearing frequency, can explain the fat-tail, excess-volatility and autocorrelation phenomena of high-frequency returns. However, this feature does not cause the same phenomena in daily returns, so the stylized facts of daily returns depend mainly on the agents’ behavior. With the second model we investigated the effects of behavioral assumptions on daily returns. Our study indicates that the aspects responsible for generating the stylized facts of high-frequency returns and of daily returns are different.

  18. Polarized BRDF for coatings based on three-component assumption

    Science.gov (United States)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflectance distribution function) model for coatings is presented based on a three-component reflection assumption in order to improve the polarized scattering simulation capability for space objects. In this model, the specular reflection is given based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both multiple reflection and volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, are given to show that the pBRDF modeling accuracy can be significantly improved by the three-component model presented in this paper.

  19. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    International Nuclear Information System (INIS)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-01-01

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model

  20. Bridging analytical approaches for low-carbon transitions

    Science.gov (United States)

    Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.

    2016-06-01

    Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.

  1. Testing the rationality assumption using a design difference in the TV game show 'Jeopardy'

    OpenAIRE

    Sjögren Lindquist, Gabriella; Säve-Söderbergh, Jenny

    2006-01-01

    Abstract This paper empirically investigates the rationality assumption commonly applied in economic modeling by exploiting a design difference in the game-show Jeopardy between the US and Sweden. In particular we address the assumption of individuals’ capabilities to process complex mathematical problems to find optimal strategies. The vital difference is that US contestants are given explicit information before they act, while Swedish contestants individually need to calculate the same info...

  2. Formal verification of dynamic hybrid systems: a NuSMV-based model checking approach

    Directory of Open Access Journals (Sweden)

    Xu Zhi

    2018-01-01

    Full Text Available Software security is an important and challenging research topic in developing dynamic hybrid embedded software systems. Ensuring the correct behavior of these systems is particularly difficult due to the interactions between the continuous subsystem and the discrete subsystem. Currently available security analysis methods for system risks have been limited, as they rely on manual inspections of the individual subsystems under simplifying assumptions. To improve this situation, a new approach is proposed that is based on the symbolic model checking tool NuSMV. A dual PID system is used as an example system, for which the logical part and the computational part of the system are modeled in a unified manner. Constraints are constructed on the controlled object, and a counter-example path is ultimately generated, indicating that the hybrid system can be analyzed by the model checking tool.

  3. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...

  4. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    We present HYPROLOG, a novel integration of Prolog with assumptions and abduction which is implemented in, and partly borrows syntax from, Constraint Handling Rules (CHR) for integrity constraints. Assumptions are a mechanism inspired by linear logic and taken over from Assumption Grammars. The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together...

  5. Teaching and Learning Science in the 21st Century: Challenging Critical Assumptions in Post-Secondary Science

    Directory of Open Access Journals (Sweden)

    Amanda L. Glaze

    2018-01-01

    Full Text Available It is widely agreed upon that the goal of science education is building a scientifically literate society. Although there are a range of definitions for science literacy, most involve an ability to problem solve, make evidence-based decisions, and evaluate information in a manner that is logical. Unfortunately, science literacy appears to be an area where we struggle across levels of study, including with students who are majoring in the sciences in university settings. One reason for this problem is that we have opted to continue to approach teaching science in a way that fails to consider the critical assumptions that faculties in the sciences bring into the classroom. These assumptions include expectations of what students should know before entering given courses, whose responsibility it is to ensure that students entering courses understand basic scientific concepts, the roles of researchers and teachers, and approaches to teaching at the university level. Acknowledging these assumptions and the potential for action to shift our teaching and thinking about post-secondary education represents a transformative area in science literacy and preparation for the future of science as a field.

  6. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    Science.gov (United States)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1 (this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start); (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.

  7. Quantum information versus black hole physics: deep firewalls from narrow assumptions.

    Science.gov (United States)

    Braunstein, Samuel L; Pirandola, Stefano

    2018-07-13

    The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes 'all the way down' in contrast with earlier work describing only a structure at the horizon.This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  8. Quantum information versus black hole physics: deep firewalls from narrow assumptions

    Science.gov (United States)

    Braunstein, Samuel L.; Pirandola, Stefano

    2018-07-01

    The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes `all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue `Foundations of quantum mechanics and their impact on contemporary society'.

  9. ψ -ontology result without the Cartesian product assumption

    Science.gov (United States)

    Myrvold, Wayne C.

    2018-05-01

    We introduce a weakening of the preparation independence postulate of Pusey et al. [Nat. Phys. 8, 475 (2012), 10.1038/nphys2309] that does not presuppose that the space of ontic states resulting from a product-state preparation can be represented by the Cartesian product of subsystem state spaces. On the basis of this weakened assumption, it is shown that, in any model that reproduces the quantum probabilities, any pair of pure quantum states |ψ⟩, |ϕ⟩ with |⟨ψ|ϕ⟩| ≤ 1/√2 must be ontologically distinct.

  10. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  11. Assumptions for the Annual Energy Outlook 1993

    International Nuclear Information System (INIS)

    1993-01-01

    This report is an auxiliary document to the Annual Energy Outlook 1993 (AEO) (DOE/EIA-0383(93)). It presents a detailed discussion of the assumptions underlying the forecasts in the AEO. The energy modeling system is an economic equilibrium system, with component demand modules representing end-use energy consumption by major end-use sector. Another set of modules represents petroleum, natural gas, coal, and electricity supply patterns and pricing. A separate module generates annual forecasts of important macroeconomic and industrial output variables. Interactions among these components of energy markets generate projections of prices and quantities for which energy supply equals energy demand. This equilibrium modeling system is referred to as the Intermediate Future Forecasting System (IFFS). The supply models in IFFS for oil, coal, natural gas, and electricity determine supply and price for each fuel depending upon consumption levels, while the demand models determine consumption depending upon end-use price. IFFS solves for market equilibrium for each fuel by balancing supply and demand to produce an energy balance in each forecast year

  12. An optical flow algorithm based on gradient constancy assumption for PIV image processing

    International Nuclear Information System (INIS)

    Zhong, Qianglong; Yang, Hua; Yin, Zhouping

    2017-01-01

    Particle image velocimetry (PIV) has matured as a flow measurement technique. It enables the description of the instantaneous velocity field of the flow by analyzing the particle motion obtained from digitally recorded images. The correlation-based PIV evaluation technique is widely used because of its good accuracy and robustness. Although very successful, the correlation PIV technique has some weaknesses that can be avoided by optical flow based PIV algorithms. At present, most of the optical flow methods applied to PIV are based on the brightness constancy assumption. However, some aspects of flow imaging technology and the physical properties of the fluids make the brightness constancy assumption less appropriate in real PIV cases. In this paper, an implementation of a 2D optical flow algorithm (GCOF) based on the gradient constancy assumption is introduced. The proposed GCOF assumes the gradients (edges) of the illuminated PIV particles are constant during motion. It comprises two terms: a combined local-global gradient data term and a first-order divergence and vorticity smoothing term. The approach can provide accurate dense motion fields. It is tested on synthetic images and on two experimental flows. The comparison of GCOF with other optical flow algorithms indicates that the proposed method is more accurate, especially under illumination variation. The comparison of GCOF with the correlation PIV technique shows that the proposed GCOF has advantages in preserving small divergence and vorticity structures of the motion field and produces fewer outliers. As a consequence, the GCOF provides a more accurate and topologically better description of the turbulent flow. (paper)
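
    As a rough illustration of the gradient constancy assumption, the sketch below linearizes ∇I2(x+u, y+v) ≈ ∇I1(x, y) and solves for the displacement (u, v) in a local least-squares sense, i.e. a Lucas-Kanade-style solver applied to the gradient images rather than to the brightness. It is only a minimal sketch, not the paper's combined local-global GCOF algorithm (there is no divergence/vorticity smoothing term); the function name, window size and synthetic test pattern are illustrative assumptions.

        import numpy as np

        def gradient_constancy_flow(img1, img2, window=7):
            """Dense displacement field (u, v) from the gradient constancy assumption.

            Linearizing grad(I2)(x+u, y+v) = grad(I1)(x, y) gives two equations per pixel,
                Ixx*u + Ixy*v + (I2x - I1x) = 0
                Ixy*u + Iyy*v + (I2y - I1y) = 0,
            solved by least squares over a small window around each pixel.
            """
            I1, I2 = img1.astype(float), img2.astype(float)
            I1y, I1x = np.gradient(I1)            # first derivatives of frame 1
            I2y, I2x = np.gradient(I2)            # first derivatives of frame 2
            Ixy, Ixx = np.gradient(I1x)           # second derivatives (linearization coefficients)
            Iyy, _ = np.gradient(I1y)
            Ixt, Iyt = I2x - I1x, I2y - I1y       # temporal change of the gradient components

            h = window // 2
            u, v = np.zeros_like(I1), np.zeros_like(I1)
            for i in range(h, I1.shape[0] - h):
                for j in range(h, I1.shape[1] - h):
                    sl = (slice(i - h, i + h + 1), slice(j - h, j + h + 1))
                    # Stack both constancy equations for every pixel of the window.
                    A = np.column_stack([
                        np.concatenate([Ixx[sl].ravel(), Ixy[sl].ravel()]),
                        np.concatenate([Ixy[sl].ravel(), Iyy[sl].ravel()]),
                    ])
                    b = -np.concatenate([Ixt[sl].ravel(), Iyt[sl].ravel()])
                    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
                    u[i, j], v[i, j] = sol
            return u, v

        # Toy check: a smooth synthetic pattern displaced by one pixel in x.
        xg, yg = np.meshgrid(np.arange(64), np.arange(64))
        img1 = np.sin(2 * np.pi * xg / 16) + np.cos(2 * np.pi * yg / 20)
        img2 = np.roll(img1, shift=1, axis=1)
        u, v = gradient_constancy_flow(img1, img2)
        print("median (u, v):", np.median(u[10:-10, 10:-10]), np.median(v[10:-10, 10:-10]))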

  13. A new approach for modeling dry deposition velocity of particles

    Science.gov (United States)

    Giardina, M.; Buffa, P.

    2018-05-01

    The dry deposition process is recognized as an important pathway among the various removal processes of pollutants in the atmosphere. Several models reported in the literature are useful for predicting the dry deposition velocity of particles of different diameters, but many of them are not capable of representing dry deposition phenomena for several categories of pollutants and deposition surfaces. Moreover, their application is valid only under specific conditions and only if the data in that application meet all of the assumptions required of the data used to define the model. In this paper a new dry deposition velocity model based on an electrical analogy scheme is proposed to overcome the above issues. The dry deposition velocity is evaluated by assuming that the resistances that affect the particle flux in the quasi-laminar sub-layer can be combined to take into account local features of the mutual influence of inertial impaction and turbulent processes. Comparisons with experimental data from the literature indicate that the proposed model captures, with good agreement, the main dry deposition phenomena for the examined environmental conditions and deposition surfaces. The proposed approach could easily be implemented within atmospheric dispersion modeling codes and can efficiently address different deposition surfaces for several classes of particulate pollutants.
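
    The electrical (resistance) analogy underlying models of this kind can be sketched with the classic series-resistance formula vd = vs + 1/(ra + rb + ra*rb*vs). The snippet below is only a generic illustration of that analogy, not the authors' specific model: the quasi-laminar resistance uses one common smooth-surface parameterization (Brownian diffusion plus inertial impaction), and the numerical constants, friction velocity and aerodynamic resistance are assumed values.

        import numpy as np

        # Illustrative constants (SI units); the values below are assumptions for the sketch.
        K_B, T_AIR, MU, RHO_A, G = 1.381e-23, 293.15, 1.81e-5, 1.2, 9.81
        NU = MU / RHO_A            # kinematic viscosity of air
        MFP = 6.6e-8               # mean free path of air molecules [m]

        def cunningham(dp):
            """Cunningham slip correction factor for particle diameter dp [m]."""
            return 1.0 + (2 * MFP / dp) * (1.257 + 0.4 * np.exp(-0.55 * dp / MFP))

        def settling_velocity(dp, rho_p=1000.0):
            """Stokes gravitational settling velocity with slip correction [m/s]."""
            return rho_p * dp**2 * G * cunningham(dp) / (18.0 * MU)

        def deposition_velocity(dp, u_star=0.4, ra=50.0, rho_p=1000.0):
            """Resistance-analogy dry deposition velocity,
                vd = vs + 1 / (ra + rb + ra * rb * vs),
            with the quasi-laminar resistance rb combining Brownian diffusion and
            inertial impaction (a common smooth-surface parameterization)."""
            vs = settling_velocity(dp, rho_p)
            D = K_B * T_AIR * cunningham(dp) / (3.0 * np.pi * MU * dp)   # Brownian diffusivity
            sc = NU / D                                                  # Schmidt number
            st = vs * u_star**2 / (G * NU)                               # Stokes number
            rb = 1.0 / (u_star * (sc ** (-2.0 / 3.0) + 10.0 ** (-3.0 / st)))
            return vs + 1.0 / (ra + rb + ra * rb * vs)

        # Example: deposition velocity from the ultrafine to the coarse mode.
        for dp in (1e-8, 1e-7, 1e-6, 1e-5):
            print(f"dp = {dp:.0e} m  ->  vd = {deposition_velocity(dp):.2e} m/s")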

  14. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Gaurav [Department of Mechanical Engineering, The University of Akron, Akron, OH 44325 (United States); Raju, Mandhapati P. [General Motor R and D Tech Center, Warren, MI 48090 (United States); Sung, Chih-Jen [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269 (United States)

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in adequately predicting the first-stage ignition delays, although a quantitative discrepancy for the prediction of the total ignition delays and the pressure rise in the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations reduces. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of the ignition delay simulation. (author)

  15. Early Validation of Automation Plant Control Software using Simulation Based on Assumption Modeling and Validation Use Cases

    Directory of Open Access Journals (Sweden)

    Veronika Brandstetter

    2015-10-01

    Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality and with minimal strain on resources. A key driver in conducting these processes is the automation plant’s control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes, and from the physical plant design. Currently, the validation of the control software often starts late in the engineering process – once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects is, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on the example of an actual plant project from the automation industry and present its technical implementation.

  16. Testing surrogacy assumptions: can threatened and endangered plants be grouped by biological similarity and abundances?

    Directory of Open Access Journals (Sweden)

    Judy P Che-Castaldo

    Full Text Available There is renewed interest in implementing surrogate species approaches in conservation planning due to the large number of species in need of management but limited resources and data. One type of surrogate approach involves selection of one or a few species to represent a larger group of species requiring similar management actions, so that protection and persistence of the selected species would result in conservation of the group of species. However, among the criticisms of surrogate approaches is the need to test underlying assumptions, which remain rarely examined. In this study, we tested one of the fundamental assumptions underlying use of surrogate species in recovery planning: that there exist groups of threatened and endangered species that are sufficiently similar to warrant similar management or recovery criteria. Using a comprehensive database of all plant species listed under the U.S. Endangered Species Act and tree-based random forest analysis, we found no evidence of species groups based on a set of distributional and biological traits or by abundances and patterns of decline. Our results suggested that application of surrogate approaches for endangered species recovery would be unjustified. Thus, conservation planning focused on individual species and their patterns of decline will likely be required to recover listed species.

  17. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool provided the climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data is obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role in percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave simpler (i.e. more linear) when percolation dynamics are upscaled.
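
    As a toy illustration of replacing a process-based simulator with data-based surrogates, the snippet below fits a linear regression, a regression tree (CART) and a multilayer perceptron to synthetic data standing in for Hydrus1D output; the predictor names, the response function and all hyperparameters are illustrative assumptions, not the study's configuration.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)

        # Hypothetical surrogate-training data standing in for Hydrus1D output;
        # predictors could be e.g. rainfall, potential ET, groundwater depth, clay content.
        n = 2000
        X = rng.uniform(size=(n, 4))
        y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * np.tanh(4.0 * (X[:, 2] - 0.5))
             + 0.1 * X[:, 3] * (X[:, 1] > 0.6) + 0.05 * rng.normal(size=n))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        surrogates = {
            "LR":   LinearRegression(),
            "CART": DecisionTreeRegressor(max_depth=6, random_state=0),
            "MLP":  MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
        }
        for name, model in surrogates.items():
            model.fit(X_tr, y_tr)
            print(f"{name}: held-out R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")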

  18. Assumptions to the model of managing knowledge workers in modern organizations

    Directory of Open Access Journals (Sweden)

    Igielski Michał

    2017-05-01

    Full Text Available Changes in the twenty-first century occur faster, appear suddenly, and are not always desirable for the smooth functioning of the company. This is the domain of globalization, in which new events - opportunities or threats - constantly force the company to act. More and more depends on the intangible assets of the undertaking and its strategic potential. Certain types of work require more knowledge, experience and independent thinking than others. Therefore, in this article the author has taken up the subject of knowledge workers in contemporary organizations. The aim of the study is to attempt to formulate assumptions for a knowledge management model in these organizations, based on literature analysis and empirical research. In this regard, the author describes the contemporary conditions of employee management and the skills and competences of knowledge workers. In addition, he conducted research (2016) in 100 medium enterprises in the province of Pomerania, using a questionnaire and an interview. Already at the beginning of the analysis of the collected data, it turned out that it should be important for all employers to recognize the emergence of a new category of managers who hold knowledge useful for the functioning of the company. Moreover, drawing on experience gained in a similar research process previously carried out in companies from the Baltic Sea Region, the author was aware of the positive influence of these people on creating new solutions or improving the quality of existing products or services.

  19. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    Science.gov (United States)

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  20. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    Science.gov (United States)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations through MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model shows a better fit than the state space model with linear and Gaussian assumptions.
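
    A Monte Carlo (particle filter) treatment of a nonlinear, non-Gaussian state-space model can be sketched as below: indirect measurements update a cloud of particles representing the latent degradation indicator, with no linearity or Gaussianity assumptions on the degradation process. The model form, noise levels and the plain multinomial resampling are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(1)

        def bootstrap_particle_filter(y, n_particles=2000, growth=0.05, q=0.02, r=0.1):
            """Minimal bootstrap particle filter for a degradation state-space model:
                x_t = x_{t-1} + growth + |process noise|   (irreversible growth)
                y_t = x_t + measurement noise
            """
            x = np.abs(rng.normal(0.0, 0.1, n_particles))   # initial particles (e.g. crack depth)
            estimates = []
            for obs in y:
                # Propagate: non-negative degradation increments (no linear/Gaussian assumption).
                x = x + growth + np.abs(rng.normal(0.0, q, n_particles))
                # Weight particles by the likelihood of the indirect measurement.
                w = np.exp(-0.5 * ((obs - x) / r) ** 2)
                w /= w.sum()
                # Resample (systematic resampling would be the usual refinement).
                x = rng.choice(x, size=n_particles, replace=True, p=w)
                estimates.append(x.mean())
            return np.array(estimates)

        # Synthetic example: a latent crack depth observed through a noisy indirect indicator.
        true_x = np.cumsum(0.05 + np.abs(rng.normal(0.0, 0.02, 50)))
        y_obs = true_x + rng.normal(0.0, 0.1, 50)
        est = bootstrap_particle_filter(y_obs)
        print(f"final true depth {true_x[-1]:.2f}, filtered estimate {est[-1]:.2f}")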

  1. 40 CFR 265.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Financial Requirements § 265.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  2. 40 CFR 144.66 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State assumption of responsibility...

  3. 40 CFR 264.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... FACILITIES Financial Requirements § 264.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  4. 40 CFR 261.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... Excluded Hazardous Secondary Materials § 261.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  5. 40 CFR 267.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... STANDARDIZED PERMIT Financial Requirements § 267.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability... 40 Protection of Environment 26 2010-07-01 2010-07-01 false State assumption of responsibility...

  6. Overview of the FEP analysis approach to model development

    International Nuclear Information System (INIS)

    Bailey, L.

    1998-01-01

    This report heads a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A five-stage approach has been adopted, which provides a systematic framework for addressing uncertainty and for the documentation of all modelling decisions and assumptions. The five stages are as follows: Stage 1: FEP Analysis - compilation and structuring of a FEP database; Stage 2: Scenario and Conceptual Model Development; Stage 3: Mathematical Model Development; Stage 4: Software Development; Stage 5: Confidence Building. This report describes the development and structuring of a FEP database as a Master Directed Diagram (MDD) and explains how this may be used to identify different modelling scenarios, based upon the identification of scenario-defining FEPs. The methodology describes how the possible evolution of a repository system can be addressed in terms of a base scenario, a broad and reasonable representation of the 'natural' evolution of the system, and a number of variant scenarios, representing the effects of probabilistic events and processes. The MDD has been used to identify conceptual models to represent the base scenario and the interactions between these conceptual models have been systematically reviewed using a matrix diagram technique. This has led to the identification of modelling requirements for the base scenario, against which existing assessment software capabilities have been reviewed. A mechanism for combining probabilistic scenario-defining FEPs to construct multi-FEP variant scenarios has been proposed and trialled using the concept of a 'timeline', a defined sequence of events, from which consequences can be assessed. An iterative approach, based on conservative modelling principles, has been proposed for the evaluation of

  7. Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation

    Science.gov (United States)

    Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels

    2016-06-01

    An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.

  8. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    NARCIS (Netherlands)

    Ernst, Anja F.; Albers, Casper J.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated

  9. A formal statistical approach to representing uncertainty in rainfall-runoff modelling with focus on residual analysis and probabilistic output evaluation - Distinguishing simulation and prediction

    DEFF Research Database (Denmark)

    Breinholt, Anders; Møller, Jan Kloppenborg; Madsen, Henrik

    2012-01-01

    While there seems to be consensus that hydrological model outputs should be accompanied with an uncertainty estimate the appropriate method for uncertainty estimation is not agreed upon and a debate is ongoing between advocators of formal statistical methods who consider errors as stochastic...... and GLUE advocators who consider errors as epistemic, arguing that the basis of formal statistical approaches that requires the residuals to be stationary and conform to a statistical distribution is unrealistic. In this paper we take a formal frequentist approach to parameter estimation and uncertainty...... necessary but the statistical assumptions were nevertheless not 100% justified. The residual analysis showed that significant autocorrelation was present for all simulation models. We believe users of formal approaches to uncertainty evaluation within hydrology and within environmental modelling in general...

  10. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on the similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset has been selected by matching the present-day conditions to the archived dataset; days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA) have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013 and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and Nash-Sutcliffe model efficiency index. Statistical analysis of SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
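
    A minimal sketch of the similarity-based idea: instead of training the ensemble weights on the most recent days, select the archived days most similar to the current atmospheric state and fit the multi-model weights on those analogs only. The similarity metric, array shapes and toy data below are assumptions for illustration, not the SMME implementation.

        import numpy as np

        def smme_weights(current_state, hist_states, hist_model_fcsts, hist_obs, k=50):
            """Fit multi-model combination weights on the k most similar archived days."""
            # 1. Rank archived days by similarity (smallest distance) to the current state.
            dist = np.linalg.norm(hist_states - current_state, axis=1)
            analog_idx = np.argsort(dist)[:k]

            # 2. Fit linear combination weights of the member models on the analog days.
            A = hist_model_fcsts[analog_idx]        # (k, n_models) member forecasts
            b = hist_obs[analog_idx]                # (k,) observed rainfall
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return coeffs                           # apply as current_fcsts @ coeffs

        # Toy usage with 4 GCM members and a 1000-day archive.
        rng = np.random.default_rng(0)
        hist_states = rng.normal(size=(1000, 10))
        truth = hist_states[:, 0] * 2.0 + rng.normal(scale=0.3, size=1000)
        hist_model_fcsts = np.column_stack([truth + rng.normal(scale=s, size=1000)
                                            for s in (0.5, 0.8, 1.0, 1.5)])
        w = smme_weights(hist_states[0], hist_states, hist_model_fcsts, truth)
        print("ensemble weights:", np.round(w, 2))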

  11. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    Science.gov (United States)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  12. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression imposes strict assumptions on the model form, whereas nonparametric regression does not require such assumptions. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must be determined first. In time series regression the response variable is the value at time t (y_t), while the predictor variables are the significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One of its advantages is the ability to handle data with a trigonometric (periodic) pattern. Modeling with a Fourier series requires the parameter K, and the number of terms K can be determined using the Generalized Cross Validation (GCV) method. In inflation modeling for the transportation, communication and financial services sector, the Fourier series approach yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
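
    A minimal sketch of Fourier series regression with GCV-based selection of K is given below; the trend term, the assumed period of 12 and the synthetic series are illustrative assumptions, not the study's data or specification.

        import numpy as np

        def fourier_design(t, K, period):
            """Design matrix with intercept, linear trend and K sine/cosine pairs."""
            cols = [np.ones_like(t), t]
            for k in range(1, K + 1):
                cols.append(np.sin(2 * np.pi * k * t / period))
                cols.append(np.cos(2 * np.pi * k * t / period))
            return np.column_stack(cols)

        def gcv_score(y, X):
            """Generalized Cross Validation for the linear smoother y_hat = H y."""
            H = X @ np.linalg.pinv(X)                 # hat matrix of the OLS fit
            resid = y - H @ y
            n, tr = len(y), np.trace(H)
            return np.mean(resid**2) / (1.0 - tr / n) ** 2

        # Toy monthly inflation-like series with an annual cycle (illustrative data).
        rng = np.random.default_rng(0)
        t = np.arange(120, dtype=float)
        y = 0.3 + 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, t.size)

        scores = {K: gcv_score(y, fourier_design(t, K, period=12)) for K in range(1, 6)}
        best_K = min(scores, key=scores.get)
        print("GCV-optimal K =", best_K)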

  13. UNCERTAINTY IN NEOCLASSICAL AND KEYNESIAN THEORETICAL APPROACHES: A BEHAVIOURAL PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Sinziana BALTATESCU

    2015-11-01

    Full Text Available The "mainstream" neoclassical assumptions about human economic behaviour are currently challenged both by behavioural research on human behaviour and by other theoretical approaches which, in the context of the recent economic and financial crisis, find arguments to reinforce their theoretical statements. The neoclassical "perfect rationality" assumption is the most criticized and prompts the mainstream theoretical approach to revisit its theoretical framework in order to re-establish the validity of its economic models. Uncertainty seems, in this context, to be the concept that allows other theoretical approaches to take into consideration an individual who is more realistic from the psychological perspective. This paper presents a comparison between the neoclassical and Keynesian approaches to uncertainty, considering the behavioural arguments and challenges addressed to the mainstream theory.

  14. Thermal radiation transfer calculations in combustion fields using the SLW model coupled with a modified reference approach

    Science.gov (United States)

    Darbandi, Masoud; Abrar, Bagher

    2018-01-01

    The spectral-line weighted-sum-of-gray-gases (SLW) model is considered a modern global model, which can be used to predict thermal radiation heat transfer within combustion fields. Past users of the SLW model have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficient. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields, where the gas temperature is very different from the reference temperature. Consequently, the results of the SLW model incorporated with the classical reference approach (the classical SLW method) are highly sensitive to the reference temperature magnitude in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is a particular one among the eight possible reference approach forms reported recently by Solovjov, et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach can provide a more accurate total emissivity calculation than the classical reference approach if it is coupled with the SLW method. This would be particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To demonstrate this, we use both the classical and modified SLW methods and calculate the radiation transfer in such fields. It is shown that the modified SLW method can almost eliminate the sensitivity of the results to the chosen reference temperature in treating highly non-isothermal combustion fields.
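
    The weighted-sum-of-gray-gases structure on which SLW-type models rest can be sketched as eps(T, pL) = sum_i a_i(T) * (1 - exp(-k_i*p*L)). The snippet below only illustrates that structure: the gray-gas absorption coefficients and weight polynomials are placeholder values, not actual SLW correlations, and no reference-approach scaling is included.

        import numpy as np

        # Weighted-sum-of-gray-gases style total emissivity,
        #     eps(T, pL) = sum_i a_i(T) * (1 - exp(-k_i * p * L)),
        # with one "clear" gas (k = 0). K_GRAY and B_POLY are hypothetical placeholders,
        # NOT actual SLW/WSGG correlation data.
        K_GRAY = np.array([0.0, 0.5, 5.0, 50.0])        # [1/(atm*m)], hypothetical
        B_POLY = np.array([[0.40, -0.10],               # a_i(T) ~ b_i0 + b_i1*(T/1000)
                           [0.25,  0.05],
                           [0.20,  0.02],
                           [0.10, -0.02]])

        def gray_gas_weights(temperature):
            a = B_POLY[:, 0] + B_POLY[:, 1] * (temperature / 1000.0)
            return a / a.sum()                          # weights must sum to one

        def total_emissivity(temperature, p_atm, path_m):
            a = gray_gas_weights(temperature)
            return float(np.sum(a * (1.0 - np.exp(-K_GRAY * p_atm * path_m))))

        print(f"eps(1500 K, 0.2 atm, 1 m path) = {total_emissivity(1500.0, 0.2, 1.0):.3f}")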

  15. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    Science.gov (United States)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
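
    The fuzzy (alpha-cut) side of such an analysis can be sketched as below: for each alpha level, the response interval is obtained by minimizing and maximizing the response over the alpha-cut box of the triangular fuzzy inputs. In the paper this optimization would act on a cheap low-order HDMR surrogate rather than on the full model; here the simple closed-form response and all numerical values are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def alpha_cut(center, half_width, alpha):
            """Alpha-cut interval of a symmetric triangular fuzzy number."""
            spread = half_width * (1.0 - alpha)
            return center - spread, center + spread

        def propagate_fuzzy(response, fuzzy_params, alphas=np.linspace(0.0, 1.0, 6)):
            """Bound the response at each alpha level by min/max over the alpha-cut box."""
            cuts = []
            for a in alphas:
                bounds = [alpha_cut(c, w, a) for c, w in fuzzy_params]
                x0 = np.array([c for c, _ in fuzzy_params])
                lo = minimize(response, x0, bounds=bounds).fun
                hi = -minimize(lambda x: -response(x), x0, bounds=bounds).fun
                cuts.append((a, lo, hi))
            return cuts

        # Example: first natural frequency of a 1-DOF system, f = sqrt(k/m) / (2*pi),
        # with fuzzy stiffness k [N/m] and mass m [kg] (illustrative values).
        response = lambda x: float(np.sqrt(x[0] / x[1]) / (2.0 * np.pi))
        fuzzy_params = [(1.0e5, 1.0e4), (10.0, 1.0)]    # (center, half-width) for k and m
        for a, lo, hi in propagate_fuzzy(response, fuzzy_params):
            print(f"alpha = {a:.1f}: frequency in [{lo:.2f}, {hi:.2f}] Hz")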

  16. DDH-Like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2012-01-01

    We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\mathbb{F}_{q}$ “in the exponent” of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R_f=......-Reingold style pseudorandom functions, and auxiliary input secure encryption. This can be seen as an alternative to the known family of k-LIN assumptions....

  17. A framework for the organizational assumptions underlying safety culture

    International Nuclear Information System (INIS)

    Packer, Charles

    2002-01-01

    The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)

  18. The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.

    Science.gov (United States)

    Meindl, Peter; Johnson, Kate M; Graham, Jesse

    2016-04-01

    Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative-but not positive-trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.

  19. Surface-seeking radionuclides in the skeleton: current approach and recent developments in biokinetic modelling for humans and beagles

    International Nuclear Information System (INIS)

    Luciani, A.; Polig, E.

    2007-01-01

    In the last decade, the biokinetics of surface-seeking radionuclides in the skeleton has been the object of several studies. Investigations were carried out to determine the kinetics of plutonium and americium in the skeleton of humans and beagles. As a result of these investigations, in recent years the models presented by ICRP in Publication 67 for humans were partially revised, particularly the skeletal part. The aim of the present work is to present recent developments in the biokinetic modelling of surface-seeking radionuclides (plutonium and americium) in beagles and humans. Various assumptions and physiological interpretations of the different approaches to the biokinetic modelling of the skeleton are discussed. Current ICRP concepts and skeleton modelling of plutonium and americium in humans are compared to the latest developments in biokinetic modelling in beagles. (authors)

  20. Assessing risk from intelligent attacks: A perspective on approaches

    International Nuclear Information System (INIS)

    Guikema, Seth D.; Aven, Terje

    2010-01-01

    Assessing the uncertainties in and severity of the consequences of intelligent attacks are fundamentally different from risk assessment for accidental events and other phenomena with inherently random failures. Intelligent attacks against a system involve adaptation on the part of the adversary. The probabilities of the initiating events depend on the risk management actions taken, and they may be more difficult to assess due to high degrees of epistemic uncertainty about the motivations and future actions of adversaries. Several fundamentally different frameworks have been proposed for assessing risk from intelligent attacks. These include basing risk assessment and management on game theoretic modelling of attacker actions, using a probabilistic risk analysis (PRA) approach based on eliciting probabilities of different initiating events from appropriate experts, assessing uncertainties beyond probabilities and expected values, and ignoring the probabilities of the attacks and choosing to protect highest valued targets. In this paper we discuss and compare the fundamental assumptions that underlie each of these approaches. We then suggest a new framework that makes the fundamental assumptions underlying the approaches clear to decision makers and presents them with a suite of results from conditional risk analysis methods. Each of the conditional methods presents the risk from a specified set of fundamental assumptions, allowing the decision maker to see the impacts of these assumptions on the risk management strategies considered and to weight the different conditional results with their assessments of the relative likelihood of the different sets of assumptions.

  1. Stochastic models of the Social Security trust funds.

    Science.gov (United States)

    Burdick, Clark; Manchester, Joyce

    Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. The results presented in this article demonstrate that several stochastic models deliver broadly consistent results even though they use very different approaches and assumptions. However, they also show that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. For example, neither parameter uncertainty nor variability in ultimate assumption values is recognized explicitly in the analyses. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future.

  2. 'Distorted structure modelling' - a more physical approach to Rapid Distortion Theory

    International Nuclear Information System (INIS)

    Savill, A.M.

    1979-11-01

    Rapid Distortion Theory is reviewed in the light of the modern mechanistic approach to turbulent motion. The apparent failure of current models, based on this theory, to predict stress intensity ratios accurately in distorted shear flows is attributed to their oversimplistic assumptions concerning the inherent turbulence structure of such flows. A more realistic picture of this structure and the manner in which it responds to distortion is presented in terms of interactions between the mean flow and three principal types of eddies. If Rapid Distortion Theory is modified to account for this it is shown that the stress intensity ratios can be accurately predicted in three test flows. It is concluded that a computational scheme based on Rapid Distortion Theory might ultimately be capable of predicting turbulence parameters in the highly complex geometries of reactor cooling systems. (author)

  3. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.

  4. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  5. Economic modeling of sealing primary molars using a "value of information" approach.

    Science.gov (United States)

    Ney, J P; van der Goes, D N; Chi, D L

    2014-09-01

    The objective was to evaluate 2 primary molar sealant strategies for publicly insured children using an "expected value of perfect information" (EVPI) approach. We converted a 10,000-observation tooth-level cost-effectiveness simulation model comparing 2 primary molar sealant strategies - always seal (AS) and standard care (SC) - to a 1,250-observation child-level model. Costs per child per restoration or extraction averted were estimated. Opportunity losses under the AS strategy were determined for children for whom SC was the optimal choice. We determined the EVPI by multiplying mean opportunity losses by the projected incident population of publicly insured 3-year-olds in the US over 10 years, with costs discounted at 2%. All analyses were conducted under assumptions of high and low intrachild correlations between at-risk teeth. The AS strategy cost $43.68 over SC (95% CI: -$5.50, $92.86) per child per restoration or extraction averted under the high intrachild correlation assumption and $15.54 (95% CI: $7.86, $23.20) under the low intrachild correlation. Under high intrachild correlation, mean opportunity losses were $80.28 (95% CI: $76.39, $84.17) per child, and AS was the optimal strategy in 31% of children. Under low correlation, mean opportunity losses were $14.61 (95% CI: $12.20, $17.68) and AS was the optimal strategy in 87% of children. The EVPI was calculated at $530,813,740 and $96,578,389 (for high and low intrachild correlation, respectively), for a projected total incident population of 8,059,712 children. On average, always sealing primary molars is more effective than standard care, but widespread implementation of this preventive approach among publicly insured children would result in large opportunity losses. Additional research is needed to identify the subgroups of publicly insured children who would benefit the most from this effective and potentially cost-saving public health intervention. © International & American Associations for Dental
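
    The expected value of perfect information used in the study reduces to the mean opportunity loss of the strategy with the best expected net benefit, scaled to the incident population. The sketch below shows that calculation on purely illustrative simulated net benefits; only the projected population figure is taken from the record.

        import numpy as np

        rng = np.random.default_rng(42)

        # Simulated net benefit per child under the two strategies (always seal vs.
        # standard care); the distributions below are purely illustrative assumptions.
        n_sims = 1250
        nb = np.column_stack([
            rng.normal(loc=55.0, scale=60.0, size=n_sims),   # always seal (AS)
            rng.normal(loc=40.0, scale=45.0, size=n_sims),   # standard care (SC)
        ])

        # Under current information we adopt the strategy with the best *expected* net benefit.
        best_current = nb.mean(axis=0).max()

        # With perfect information the best strategy could be chosen in every simulation.
        best_perfect = nb.max(axis=1).mean()

        evpi_per_child = best_perfect - best_current     # mean opportunity loss
        population = 8_059_712                           # projected incident population (from the study)
        print(f"EVPI per child: ${evpi_per_child:.2f}")
        print(f"Population EVPI: ${evpi_per_child * population:,.0f}")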

  6. Influence of simulation assumptions and input parameters on energy balance calculations of residential buildings

    International Nuclear Information System (INIS)

    Dodoo, Ambrose; Tettey, Uniben Yao Ayikoe; Gustavsson, Leif

    2017-01-01

    In this study, we modelled the influence of different simulation assumptions on energy balances of two variants of a residential building, comprising the building in its existing state and with energy-efficient improvements. We explored how selected parameter combinations and variations affect the energy balances of the building configurations. The selected parameters encompass outdoor microclimate, building thermal envelope and household electrical equipment including technical installations. Our modelling takes into account hourly as well as seasonal profiles of different internal heat gains. The results suggest that the impact of parameter interactions on calculated space heating of buildings is somewhat small and relatively more noticeable for an energy-efficient building in contrast to a conventional building. We find that the influence of parameter combinations is more apparent as more individual parameters are varied. The simulations show that a building's calculated space heating demand is significantly influenced by how heat gains from electrical equipment are modelled. For the analyzed building versions, calculated final energy for space heating differs by 9–14 kWh/m² depending on the assumed energy efficiency level for electrical equipment. The influence of electrical equipment on calculated final space heating is proportionally more significant for an energy-efficient building compared to a conventional building. This study shows the influence of different simulation assumptions and parameter combinations when varied simultaneously. - Highlights: • Energy balances are modelled for conventional and efficient variants of a building. • Influence of assumptions and parameter combinations and variations are explored. • Parameter interactions influence is apparent as more single parameters are varied. • Calculated space heating demand is notably affected by how heat gains are modelled.

  7. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
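
    A common practical stand-in for a Dirichlet process mixture of normals is the truncated variational approximation available in scikit-learn; the sketch below shows how the truncation level acts only as an upper bound on the number of occupied components rather than fixing a finite number of classes. The synthetic "random effects" data and the concentration parameter are illustrative assumptions.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)

        # Synthetic subject-level effects drawn from a non-Gaussian (two-component) mixture,
        # standing in for latent random effects in a hierarchical model.
        effects = np.concatenate([rng.normal(-2.0, 0.5, 300),
                                  rng.normal(1.5, 1.0, 700)]).reshape(-1, 1)

        # Truncated Dirichlet process mixture of normals (variational approximation):
        # n_components is only an upper bound; unused components receive weight ~ 0.
        dpm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            weight_concentration_prior=1.0,     # DP concentration parameter (assumed)
            max_iter=500,
            random_state=0,
        ).fit(effects)

        print("posterior component weights:", np.round(dpm.weights_, 3))
        print("effectively occupied components:", int(np.sum(dpm.weights_ > 0.01)))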

  8. Tale of Two Courthouses: A Critique of the Underlying Assumptions in Chronic Disease Self-Management for Aboriginal People

    Directory of Open Access Journals (Sweden)

    Isabelle Ellis

    2009-12-01

    Full Text Available This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models. Namely that there are a clear set of instructions for patients to comply with, that all health care providers agree with; and that the health care provider and the patient agree with the chronic disease self-management plan that was developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. These assumptions have been found to lack validity in this context, therefore an alternative model to enhance chronic disease care is proposed.

  9. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  10. Legal assumptions for private company claim for additional (supplementary payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    Full Text Available The subject matter of analysis in this article is the legal assumptions that must be met in order to enable a private company to call for additional payments. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payments in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting that creates individual obligations for additional payments. The third assumption is defined as definiteness regarding the sum of the payment and its due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for the realization of the company's right to claim additional payments from a member of the private company.

  11. Addressing uncertainties in the ERICA Integrated Approach

    International Nuclear Information System (INIS)

    Oughton, D.H.; Agueero, A.; Avila, R.; Brown, J.E.; Copplestone, D.; Gilek, M.

    2008-01-01

    Like any complex environmental problem, ecological risk assessment of the impacts of ionising radiation is confounded by uncertainty. At all stages, from problem formulation through to risk characterisation, the assessment is dependent on models, scenarios, assumptions and extrapolations. These include technical uncertainties related to the data used, conceptual uncertainties associated with models and scenarios, as well as social uncertainties such as economic impacts, the interpretation of legislation, and the acceptability of the assessment results to stakeholders. The ERICA Integrated Approach has been developed to allow an assessment of the risks of ionising radiation, and includes a number of methods that are intended to make the uncertainties and assumptions inherent in the assessment more transparent to users and stakeholders. Throughout its development, ERICA has recommended that assessors deal openly with the deeper dimensions of uncertainty and acknowledge that uncertainty is intrinsic to complex systems. Since the tool is based on a tiered approach, the approaches to dealing with uncertainty vary between the tiers, ranging from a simple, but highly conservative screening to a full probabilistic risk assessment including sensitivity analysis. This paper gives on overview of types of uncertainty that are manifest in ecological risk assessment and the ERICA Integrated Approach to dealing with some of these uncertainties

  12. Proposed optical test of Bell's inequalities not resting upon the fair sampling assumption

    International Nuclear Information System (INIS)

    Santos, Emilio

    2004-01-01

    Arguments are given against the fair sampling assumption, used to claim an empirical disproof of local realism. New tests are proposed, able to discriminate between quantum mechanics and a restricted, but appealing, family of local hidden-variables models. Such tests require detectors with efficiencies just above 20%

  13. Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach

    Directory of Open Access Journals (Sweden)

    Martin M Monti

    2011-03-01

    Full Text Available Functional Magnetic Resonance Imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a General Linear Model (GLM) approach to separate stimulus induced signals from noise. Crucially, this approach relies on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to analysis of fMRI time-series, focusing in particular on the degree to which such data abides by the assumptions of the GLM framework, and on the methods that have been developed to correct for any violation of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power and false positive rates. Furthermore, this bias can have pervasive effects on both individual subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making.
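
    A minimal GLM fit for a single voxel is sketched below: ordinary least squares on a design matrix of task and nuisance regressors, followed by a contrast t-statistic. HRF convolution and prewhitening are omitted and the simulated data are illustrative; as the review stresses, the variance estimate (and hence the t-statistic) is only valid if the residuals satisfy the model's independence assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy fMRI-style time series for one voxel: a boxcar task regressor, drift, noise.
        n_scans = 200
        task = (np.arange(n_scans) % 40 < 20).astype(float)     # on/off blocks
        drift = np.linspace(0.0, 1.0, n_scans)
        y = 2.0 * task + 1.5 * drift + rng.normal(0.0, 1.0, n_scans)

        # GLM design matrix: intercept, task regressor, linear drift.
        X = np.column_stack([np.ones(n_scans), task, drift])

        # Ordinary least squares estimates and a t-statistic for the task contrast.
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = n_scans - X.shape[1]
        sigma2 = resid @ resid / dof
        c = np.array([0.0, 1.0, 0.0])                            # contrast: task > baseline
        se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
        t_stat = (c @ beta) / se
        print(f"task beta = {beta[1]:.2f}, t = {t_stat:.2f} "
              f"(valid only if residuals are white; autocorrelated noise biases this variance)")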

  14. Sensitivity of C-Band Polarimetric Radar-Based Drop Size Distribution Measurements to Maximum Diameter Assumptions

    Science.gov (United States)

    Carey, Lawrence D.; Petersen, Walter A.

    2011-01-01

    Mission (GPM/PMM Science Team)-funded study is to document the sensitivity of DSD measurements, including estimates of D0, from C-band Z(sub dr) and reflectivity to this range of D(sub max) assumptions. For this study, GPM Ground Validation 2DVD's were operated under the scanning domain of the UAHuntsville ARMOR C-band dual-polarimetric radar. Approximately 7500 minutes of DSD data were collected and processed to create gamma size distribution parameters using a truncated method of moments approach. After creating the gamma parameter datasets the DSD's were then used as input to a T-matrix model for computation of polarimetric radar moments at C-band. All necessary model parameterizations, such as temperature, drop shape, and drop fall mode, were fixed at typically accepted values while the D(sub max) assumption was allowed to vary in sensitivity tests. By hypothesizing a DSD model with D(sub max) (fit) from which the empirical fit to D0 = F[Z(sub dr)] was derived via non-linear least squares regression and a separate reference DSD model with D(sub max) (truth), bias and standard error in D0 retrievals were estimated in the presence of Z(sub dr) measurement error and hypothesized mismatch in D(sub max) assumptions. Although the normalized standard error for D0 = F[Z(sub dr)] can increase slightly (as much as from 11% to 16% for all 7500 DSDs) when the D(sub max) (fit) does not match D(sub max) (truth), the primary impact of uncertainty in D(sub max) is a potential increase in normalized bias error in D0 (from 0% to as much as 10% over all 7500 DSDs, depending on the extent of the mismatch between D(sub max) (fit) and D(sub max) (truth)). For DSDs characterized by large Z(sub dr) (Z(sub dr) > 1.5 to 2.0 dB), the normalized bias error for D0 estimation at C-band is sometimes unacceptably large (> 10%), again depending on the extent of the hypothesized D(sub max) mismatch. Modeled errors in D0 retrievals from Z(sub dr) at C-band are demonstrated in detail and compared to

  15. A Bayesian nonparametric approach to causal inference on quantiles.

    Science.gov (United States)

    Xu, Dandan; Daniels, Michael J; Winterstein, Almut G

    2018-02-25

    We propose a Bayesian nonparametric approach (BNP) for causal inference on quantiles in the presence of many confounders. In particular, we define relevant causal quantities and specify BNP models to avoid bias from restrictive parametric assumptions. We first use Bayesian additive regression trees (BART) to model the propensity score and then construct the distribution of potential outcomes given the propensity score using a Dirichlet process mixture (DPM) of normals model. We thoroughly evaluate the operating characteristics of our approach and compare it to Bayesian and frequentist competitors. We use our approach to answer an important clinical question involving acute kidney injury using electronic health records. © 2018, The International Biometric Society.

  16. Idaho National Engineering Laboratory installation roadmap assumptions document

    International Nuclear Information System (INIS)

    1993-05-01

    This document is a composite of roadmap assumptions developed for the Idaho National Engineering Laboratory (INEL) by the US Department of Energy Idaho Field Office and subcontractor personnel as a key element in the implementation of the Roadmap Methodology for the INEL Site. The development and identification of these assumptions is an important factor in planning basis development and establishes the planning baseline for all subsequent roadmap analysis at the INEL

  17. CHILDREN'S EDUCATION IN THE REGULAR NATIONAL BASIS: ASSUMPTIONS AND INTERFACES WITH PHYSICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    André da Silva Mello

    2016-09-01

    This paper discusses the organization of Children's Education within the Regular Curricular National Basis (BNCC), focusing on the continuities and advances in relation to the preceding documents, and analyzes the presence of Physical Education in Children's Education based on the assumptions that guide the Base, in interface with research on pedagogical experiences in this field of knowledge. To do so, it carries out a documental-bibliographic analysis, using as sources the BNCC, the National Curricular Referential for Children's Education, the National Curricular Guidelines for Children's Education, and academic-scientific production from the Physical Education area that addresses Children's Education. In the analysis process, the work establishes categories which allow the interlocution among the different sources used in this study. The data analyzed indicate that the assumptions present in the BNCC dialogue, although not explicitly, with the movements of the curricular component and with the academic-scientific production of Physical Education regarding Children's Education.

  18. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
    · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
    · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
    · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
    · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
    · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
    · an overview of the

  20. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  1. Questioning the "big assumptions". Part I: addressing personal contradictions that impede professional development.

    Science.gov (United States)

    Bowe, Constance M; Lahey, Lisa; Armstrong, Elizabeth; Kegan, Robert

    2003-08-01

    The ultimate success of recent medical curriculum reforms is, in large part, dependent upon the faculty's ability to adopt and sustain new attitudes and behaviors. However, like many New Year's resolutions, sincere intent to change may be short lived and followed by a discouraging return to old behaviors. Failure to sustain the initial resolve to change can be misinterpreted as a lack of commitment to one's original goals and eventually lead to greater effort expended in rationalizing the status quo rather than changing it. The present article outlines how a transformative process that has proven to be effective in managing personal change, Questioning the Big Assumptions, was successfully used in an international faculty development program for medical educators to enhance individual personal satisfaction and professional effectiveness. This process systematically encouraged participants to explore and proactively address currently operative mechanisms that could stall their attempts to change at the professional level. The applications of the Big Assumptions process in faculty development helped individuals to recognize and subsequently utilize unchallenged and deep rooted personal beliefs to overcome unconscious resistance to change. This approach systematically led participants away from circular griping about what was not right in their current situation to identifying the actions that they needed to take to realize their individual goals. By thoughtful testing of personal Big Assumptions, participants designed behavioral changes that could be broadly supported and, most importantly, sustained.

  2. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    Science.gov (United States)

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  3. Major Assumptions of Mastery Learning.

    Science.gov (United States)

    Anderson, Lorin W.

    Mastery learning can be described as a set of group-based, individualized, teaching and learning strategies based on the premise that virtually all students can and will, in time, learn what the school has to teach. Inherent in this description are assumptions concerning the nature of schools, classroom instruction, and learners. According to the…

  4. Inference of reactive transport model parameters using a Bayesian multivariate approach

    Science.gov (United States)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
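
    As a hedged, self-contained sketch of the contrast described above (not the authors' code), the snippet below compares a weighted-least-squares objective with fixed a-priori weights against a multivariate Gaussian log-likelihood that allows correlated residuals between species; the covariance values and sample sizes are invented.

      # Hedged sketch: WLS objective with independent residuals vs. a multivariate
      # Gaussian log-likelihood with a full residual covariance across species.
      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(2)
      n_obs, n_species = 50, 3
      residuals = rng.multivariate_normal(
          mean=np.zeros(n_species),
          cov=[[1.0, 0.6, 0.2], [0.6, 1.0, 0.4], [0.2, 0.4, 1.0]],  # correlated species errors
          size=n_obs,
      )

      # (1) WLS objective with a-priori weights (here simply unit weights per species).
      weights = np.ones(n_species)
      wls_objective = np.sum(weights * residuals ** 2)

      # (2) Multivariate log-likelihood using the empirical residual covariance.
      cov_hat = np.cov(residuals, rowvar=False)
      mv_loglik = multivariate_normal(mean=np.zeros(n_species), cov=cov_hat).logpdf(residuals).sum()

      print("WLS objective:", round(wls_objective, 1), " MV log-likelihood:", round(mv_loglik, 1))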

  5. Plant ecosystem responses to rising atmospheric CO2: applying a "two-timing" approach to assess alternative hypotheses for mechanisms of nutrient limitation

    Science.gov (United States)

    Medlyn, B.; Jiang, M.; Zaehle, S.

    2017-12-01

    There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.
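
    To make the "two-timing" idea concrete, the toy sketch below (not G'DAY itself; pools, inputs and rate constants are invented) compares a full fast-slow system with its quasi-equilibrium approximation, in which the fast pool is replaced by the equilibrium value implied by the current slow state.

      # Toy illustration of quasi-equilibrium ("two-timing") analysis: a fast pool F
      # and a slow pool S. On the slow time scale F is replaced by F* = u / kf.
      import numpy as np
      from scipy.integrate import solve_ivp

      u, kf, ks, alpha = 1.0, 5.0, 0.05, 0.5    # hypothetical input and rate constants

      def full_system(t, y):
          F, S = y
          dF = u - kf * F                 # fast dynamics
          dS = alpha * kf * F - ks * S    # slow dynamics fed by the fast pool
          return [dF, dS]

      def quasi_equilibrium(t, y):
          S = y[0]
          F_star = u / kf                 # fast pool assumed instantaneously equilibrated
          return [alpha * kf * F_star - ks * S]

      t_eval = np.linspace(0, 100, 200)
      full = solve_ivp(full_system, (0, 100), [0.0, 0.0], t_eval=t_eval)
      qe = solve_ivp(quasi_equilibrium, (0, 100), [0.0], t_eval=t_eval)

      print("final slow pool, full model:", round(full.y[1, -1], 3),
            " quasi-equilibrium:", round(qe.y[0, -1], 3))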

  6. Moving carbonation fronts in concrete: a moving-sharp-interface approach

    NARCIS (Netherlands)

    Muntean, A.; Böhm, M.; Kropp, J.

    2011-01-01

    We present a new modeling strategy for predicting the penetration of carbonation reaction fronts in concrete. The approach relies on the assumption that the carbonation reaction concentrates macroscopically on an a priori unknown narrow strip (called the reaction front) that moves gradually into the concrete.

  7. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), Department of Agriculture, Special Programs, Servicing Minor Program Loans, § 772.10 Transfer and assumption—AMP loans (2010-01-01 edition). (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1) The...

  8. Model selection approach suggests causal association between 25-hydroxyvitamin D and colorectal cancer.

    Directory of Open Access Journals (Sweden)

    Lina Zgaga

    Vitamin D deficiency has been associated with increased risk of colorectal cancer (CRC), but a causal relationship has not yet been confirmed. We investigate the direction of causation between vitamin D and CRC by extending the conventional approaches to allow pleiotropic relationships and by explicitly modelling unmeasured confounders. Plasma 25-hydroxyvitamin D (25-OHD), genetic variants associated with 25-OHD and CRC, and other relevant information were available for 2645 individuals (1057 CRC cases and 1588 controls) and included in the model. We investigate whether 25-OHD is likely to be causally associated with CRC, or vice versa, by selecting the best modelling hypothesis according to Bayesian predictive scores. We examine consistency for a range of prior assumptions. Model comparison showed preference for the causal association between low 25-OHD and CRC over the reverse causal hypothesis. This was confirmed for posterior mean deviances obtained for both models (11.5 natural log units in favour of the causal model), and also for deviance information criteria (DIC) computed for a range of prior distributions. Overall, models ignoring hidden confounding or pleiotropy had significantly poorer DIC scores. Results suggest a causal association between 25-OHD and colorectal cancer, and support the need for randomised clinical trials for further confirmation.

  9. Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.

    Science.gov (United States)

    Pink, Joshua; Parker, Ben; Petrou, Stavros

    2016-09-01

    A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters

  10. Are waves of relational assumptions eroding traditional analysis?

    Science.gov (United States)

    Meredith-Owen, William

    2013-11-01

    The author designates as 'traditional' those elements of psychoanalytic presumption and practice that have, in the wake of Fordham's legacy, helped to inform analytical psychology and expand our capacity to integrate the shadow. It is argued that this element of the broad spectrum of Jungian practice is in danger of erosion by the underlying assumptions of the relational approach, which is fast becoming the new establishment. If the maps of the traditional landscape of symbolic reference (primal scene, Oedipus et al.) are disregarded, analysts are left with only their own self-appointed authority with which to orientate themselves. This self-centric epistemological basis of the relationalists leads to a revision of 'analytic attitude' that may be therapeutic but is not essentially analytic. This theme is linked to the perennial challenge of balancing differentiation and merger and traced back, through Chasseguet-Smirgel, to its roots in Genesis. An endeavour is made to illustrate this within the Journal convention of clinically based discussion through a commentary on Colman's (2013) avowedly relational treatment of the case material presented in his recent Journal paper 'Reflections on knowledge and experience' and through an assessment of Jessica Benjamin's (2004) relational critique of Ron Britton's (1989) transference embodied approach. © 2013, The Society of Analytical Psychology.

  11. Estimating Risks and Relative Risks in Case-Base Studies under the Assumptions of Gene-Environment Independence and Hardy-Weinberg Equilibrium

    Science.gov (United States)

    Chui, Tina Tsz-Ting; Lee, Wen-Chung

    2014-01-01

    Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption. PMID:25137392

  13. Hawaiian forest bird trends: using log-linear models to assess long-term trends is supported by model diagnostics and assumptions (reply to Freed and Cann 2013)

    Science.gov (United States)

    Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.

    2014-01-01

    Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.

  14. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
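
    The sketch below gives a simplified version of the kind of formal likelihood described above, with lag-1 autocorrelated, heteroscedastic residuals. The paper uses a Skew Exponential Power density; here it is deliberately replaced by a Gaussian kernel to keep the sketch short, and all data and parameter values are synthetic.

      # Simplified sketch of a formal likelihood with lag-1 autocorrelated,
      # heteroscedastic residuals. A Gaussian kernel is substituted for the
      # paper's Skew Exponential Power density purely for brevity.
      import numpy as np

      def log_likelihood(obs, sim, phi, sigma0, sigma1):
          """phi: lag-1 autocorrelation; sigma0 + sigma1*sim: heteroscedastic std."""
          raw = obs - sim
          innov = raw[1:] - phi * raw[:-1]           # decorrelate with an AR(1) filter
          sigma = sigma0 + sigma1 * sim[1:]          # error std grows with simulated value
          return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * (innov / sigma) ** 2)

      # Tiny usage example with synthetic concentrations (illustrative only).
      rng = np.random.default_rng(3)
      sim = np.linspace(1.0, 5.0, 100)
      obs = sim + 0.2 * sim * rng.normal(size=sim.size)
      print("log-likelihood:", round(log_likelihood(obs, sim, phi=0.3, sigma0=0.05, sigma1=0.1), 1))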

  15. Model documentation Renewable Fuels Module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-01-01

    This report documents the objectives, analytical approach and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1996 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described.

  16. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    Science.gov (United States)

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum, but also produces very diverse results. Systematic reviews have so far focused on modeling techniques and epidemiologic assumptions. However, cost and utility parameters have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether utility parameters were consistent and methodologically sound, and whether costs were compatible and consistent in the actual parameters used for screening, diagnostic work-up and treatment. Of 18 studies, only three studies did not show signs of potential bias. This systematic review shows that a closer look into the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.

  17. multi-scale data assimilation approaches and error characterisation applied to the inverse modelling of atmospheric constituent emission fields

    International Nuclear Information System (INIS)

    Koohkan, Mohammad Reza

    2012-01-01

    Data assimilation in geophysical sciences aims at optimally estimating the state of the system or some parameters of the system's physical model. To do so, data assimilation needs three types of information: observations and background information, a physical/numerical model, and some statistical description that prescribes uncertainties to each component of the system. In my dissertation, new methodologies of data assimilation are used in atmospheric chemistry and physics: the joint use of a 4D-Var with a sub-grid statistical model to consistently account for representativeness errors, accounting for multiple scales in the BLUE estimation principle, and a better estimation of prior errors using objective estimation of hyper-parameters. These three approaches will be specifically applied to inverse modelling problems focusing on the emission fields of tracers or pollutants. First, in order to estimate the emission inventories of carbon monoxide over France, in-situ stations, which are affected by representativeness errors, are used. A sub-grid model is introduced and coupled with a 4D-Var to reduce the representativeness error. Indeed, the results of inverse modelling showed that the 4D-Var routine was not fit to handle the representativeness issues. The coupled data assimilation system led to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators, and more consistent estimation of the CO emission inventory. Second, the evaluation of the potential of the IMS (International Monitoring System) radionuclide network is performed for the inversion of an accidental source. In order to assess the performance of the global network, a multi-scale adaptive grid is optimised using a criterion based on degrees of freedom for the signal (DFS). The results show that several specific regions remain poorly observed by the IMS network. Finally, the inversion of the surface fluxes of Volatile Organic Compounds

  18. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality ...

  19. Modeling reactive transport processes in fractured rock using the time domain random walk approach within a dual-porosity framework

    Science.gov (United States)

    Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.

    2017-12-01

    Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.

  20. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.
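
    The arithmetic below is a purely illustrative sketch of the decomposition described above (all concentration values are invented): the urban increment is an urban-minus-rural difference of measured or modelled concentrations, whereas the city impact requires a model run with the city's emissions switched off.

      # Illustrative arithmetic only (made-up numbers): urban increment vs. city impact.
      c_urban_full = 18.0    # PM2.5 at urban background, all emissions on (ug/m3)
      c_rural_full = 15.0    # PM2.5 at rural background, all emissions on
      c_urban_nocity = 12.0  # urban background with city emissions set to zero
      c_rural_nocity = 12.5  # rural background with city emissions set to zero

      urban_increment = c_urban_full - c_rural_full          # 3.0
      city_impact = c_urban_full - c_urban_nocity            # 6.0
      rural_contamination = c_rural_full - c_rural_nocity    # assumption 1 violated if > 0
      background_mismatch = c_urban_nocity - c_rural_nocity  # assumption 2 violated if != 0

      # The increment equals the impact only when both extra terms vanish:
      assert abs(urban_increment - (city_impact + background_mismatch - rural_contamination)) < 1e-9
      print("increment:", urban_increment, "  impact:", city_impact)   # increment underestimates impact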

  1. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam; Shi, Yuexiang; Gao, Xin

    2014-01-01

    of the transductive semi-supervised algorithms takes all the three semisupervised assumptions, i.e., smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue

  2. An Assessment of the Internal Rating Based Approach in Basel II

    OpenAIRE

    Simone Varotto

    2008-01-01

    The new bank capital regulation commonly known as Basel II includes an internal ratings-based approach (IRB) to measuring credit risk in bank portfolios. The IRB relies on the assumptions that the portfolio is fully diversified and that systematic risk is driven by one common factor. In this work we empirically investigate the impact of these assumptions by comparing the risk measures produced by the IRB with those of a more general credit risk model that allows for multiple systematic risk fac...

  3. Footprint-weighted tile approach for a spruce forest and a nearby patchy clearing using the ACASA model

    Science.gov (United States)

    Gatzsche, Kathrin; Babel, Wolfgang; Falge, Eva; Pyles, Rex David; Tha Paw U, Kyaw; Raabe, Armin; Foken, Thomas

    2018-05-01

    The ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) model, with a higher-order closure for tall vegetation, has already been successfully tested and validated for homogeneous spruce forests. The aim of this paper is to test the model using a footprint-weighted tile approach for a clearing with a heterogeneous structure of the underlying surface. The comparison with flux data shows a good agreement with a footprint-aggregated tile approach of the model. However, the results of a comparison with a tile approach on the basis of the mean land use classification of the clearing are not significantly different. It is assumed that the footprint model is not accurate enough to separate small-scale heterogeneities. All measured fluxes are corrected by forcing the energy balance closure of the test data, either by maintaining the measured Bowen ratio or by attributing the residual to the buoyancy flux according to the fractions of sensible and latent heat flux. The comparison with the model, in which the energy balance is closed, shows that the buoyancy correction for Bowen ratios > 1.5 better fits the measured data. For lower Bowen ratios, the correction probably lies between the two methods, but the amount of available data was too small to draw a conclusion. With an assumption of similarity between water and carbon dioxide fluxes, no correction of the net ecosystem exchange is necessary for Bowen ratios > 1.5.
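
    The sketch below shows only the first of the two closure corrections mentioned above, the one that preserves the measured Bowen ratio; the alternative attribution of the residual toward the buoyancy flux is not reproduced here, and the half-hourly flux values are invented.

      # Sketch of the Bowen-ratio-preserving energy balance closure correction.
      def close_energy_balance_bowen(rn, g, h, le):
          """Distribute the residual Rn - G - H - LE while preserving H/LE."""
          residual = rn - g - h - le
          bowen = h / le
          h_corr = h + residual * bowen / (1.0 + bowen)
          le_corr = le + residual * 1.0 / (1.0 + bowen)
          return h_corr, le_corr

      # Hypothetical half-hourly fluxes in W m-2 (illustrative only).
      rn, g, h, le = 450.0, 40.0, 180.0, 140.0
      h_c, le_c = close_energy_balance_bowen(rn, g, h, le)
      print(round(h_c, 1), round(le_c, 1), round(h_c / le_c, 2), round(h / le, 2))  # Bowen ratio preserved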

  4. Poisson regression approach for modeling fatal injury rates amongst Malaysian workers

    International Nuclear Information System (INIS)

    Kamarulzaman Ibrahim; Heng Khai Theng

    2005-01-01

    Many safety studies are based on the analysis carried out on injury surveillance data. The injury surveillance data gathered for the analysis include information on the number of employees at risk of injury in each of several strata, where the strata are defined in terms of a series of important predictor variables. Further insight into the relationship between fatal injury rates and predictor variables may be obtained by the Poisson regression approach. Poisson regression is widely used in analyzing count data. In this study, Poisson regression is used to model the relationship between fatal injury rates and predictor variables which are year (1995-2002), gender, recording system and industry type. Data for the analysis were obtained from PERKESO and Jabatan Perangkaan Malaysia. It is found that the assumption that the data follow a Poisson distribution has been violated. After correction for the problem of overdispersion, the predictor variables that are found to be significant in the model are gender, system of recording, industry type, and two interaction effects (between recording system and industry type, and between year and industry type).
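
    As a hedged sketch of this kind of analysis (synthetic data, not the PERKESO data; the overdispersion threshold and covariates are our own choices), the snippet below fits a Poisson rate model with the number of workers as exposure using statsmodels, checks the Pearson dispersion, and refits with a negative binomial family if overdispersion is detected.

      # Hedged sketch: Poisson rate model with exposure, plus a simple
      # overdispersion check and negative-binomial refit on synthetic data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n_obs = 200
      industry = rng.integers(0, 3, n_obs)              # hypothetical industry type codes
      workers = rng.integers(500, 5000, n_obs)          # employees at risk (exposure)
      lam = workers / 1000.0 * np.exp(0.3 * industry)
      counts = rng.negative_binomial(n=2, p=2 / (2 + lam))   # overdispersed counts with mean lam

      X = sm.add_constant(np.column_stack([industry]))
      poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson(), exposure=workers).fit()
      dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
      print("Poisson dispersion:", round(dispersion, 2))     # >> 1 indicates overdispersion

      if dispersion > 1.5:
          nb_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(), exposure=workers).fit()
          print(nb_fit.params)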

  5. Dynamics of screw dislocations : a generalised minimising-movements scheme approach

    NARCIS (Netherlands)

    Bonaschi, G.A.; Meurs, van P.J.P.; Morandotti, M.

    2015-01-01

    The gradient flow structure of the model introduced in [CG99] for the dynamics of screw dislocations is investigated by means of a generalised minimising-movements scheme approach. The assumption of a finite number of available glide directions, together with the "maximal dissipation criterion" that

  6. Comparison of Cox and Gray's survival models in severe sepsis

    DEFF Research Database (Denmark)

    Kasal, Jan; Andersen, Zorana Jovanovic; Clermont, Gilles

    2004-01-01

    Although survival is traditionally modeled using Cox proportional hazards modeling, this approach may be inappropriate in sepsis, in which the proportional hazards assumption does not hold. Newer, more flexible models, such as Gray's model, may be more appropriate.
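
    The sketch below uses the lifelines package and one of its bundled example datasets as a stand-in (an assumed tool choice on our part, not the software or data of the cited study) to fit a Cox model and test the proportional hazards assumption; when the test flags violations, more flexible alternatives such as Gray's time-varying-coefficient model are indicated.

      # Hedged sketch with `lifelines` (our choice for illustration): fit a Cox
      # model and check the proportional-hazards assumption on an example dataset.
      from lifelines import CoxPHFitter
      from lifelines.datasets import load_rossi
      from lifelines.statistics import proportional_hazard_test

      df = load_rossi()                      # stand-in dataset; the sepsis data are not public
      cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

      # Schoenfeld-residual based test: small p-values flag covariates whose effect
      # changes over time, i.e. a violated proportional-hazards assumption.
      results = proportional_hazard_test(cph, df, time_transform="rank")
      print(results.summary)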

  7. How do rigid-lid assumption affect LES simulation results at high Reynolds flows?

    Science.gov (United States)

    Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration

    2017-11-01

    This research is motivated by the work of Kara et al., JHE, 2015. They employed LES to model flow around a model of an abutment at a Re number of 27,000. They showed that first-order turbulence characteristics obtained with the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling. The Reynolds number for typical open channel flows, however, could be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study by augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (about 200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at high Reynolds numbers that occur in natural waterways. Acknowledgment: Computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.

  8. Bayesian nonparametric generative models for causal inference with missing at random covariates.

    Science.gov (United States)

    Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J

    2018-03-26

    We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect-differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.

  9. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions.

    Science.gov (United States)

    Bathke, Arne C; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-03-22

    To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved.
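
    The sketch below is our own minimal construction of a parametric bootstrap in this spirit (synthetic data, not the SPECT/EEG data, and a simple two-group Wald-type statistic rather than the authors' full factorial procedure): the null distribution of the statistic is approximated by resampling each group from a centred normal with that group's own estimated covariance, so equal covariance matrices are not assumed.

      # Compact sketch: parametric bootstrap of a Wald-type statistic comparing
      # two multivariate group means without assuming equal covariance matrices.
      import numpy as np

      rng = np.random.default_rng(6)
      p, n1, n2 = 4, 25, 30
      g1 = rng.multivariate_normal(np.zeros(p), np.diag([1.0, 2.0, 1.0, 3.0]), n1)
      g2 = rng.multivariate_normal(np.zeros(p), np.diag([3.0, 1.0, 2.0, 1.0]), n2)

      def wald_stat(a, b):
          diff = a.mean(0) - b.mean(0)
          v = np.cov(a, rowvar=False) / len(a) + np.cov(b, rowvar=False) / len(b)
          return float(diff @ np.linalg.solve(v, diff))

      t_obs = wald_stat(g1, g2)

      # Resample both groups under the null of equal means, keeping each group's covariance.
      boot = []
      for _ in range(2000):
          b1 = rng.multivariate_normal(np.zeros(p), np.cov(g1, rowvar=False), n1)
          b2 = rng.multivariate_normal(np.zeros(p), np.cov(g2, rowvar=False), n2)
          boot.append(wald_stat(b1, b2))

      p_value = float(np.mean(np.array(boot) >= t_obs))
      print(round(t_obs, 2), round(p_value, 3))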

  10. Emerging Assumptions About Organization Design, Knowledge And Action

    Directory of Open Access Journals (Sweden)

    Alan Meyer

    2013-12-01

    Participants in the Organizational Design Community’s 2013 Annual Conference faced the challenge of “making organization design knowledge actionable.”  This essay summarizes the opinions and insights participants shared during the conference.  I reflect on these ideas, connect them to recent scholarly thinking about organization design, and conclude that seeking to make design knowledge actionable is nudging the community away from an assumption set based upon linearity and equilibrium, and toward a new set of assumptions based on emergence, self-organization, and non-linearity.

  11. Matrix Diffusion for Performance Assessment - Experimental Evidence, Modelling Assumptions and Open Issues

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, A

    2004-07-01

    In this report a comprehensive overview on the matrix diffusion of solutes in fractured crystalline rocks is presented. Some examples from observations in crystalline bedrock are used to illustrate that matrix diffusion indeed acts on various length scales. Fickian diffusion is discussed in detail followed by some considerations on rock porosity. Due to the fact that the dual-porosity medium model is a very common and versatile method for describing solute transport in fractured porous media, the transport equations and the fundamental assumptions, approximations and simplifications are discussed in detail. There is a variety of geometrical aspects, processes and events which could influence matrix diffusion. The most important of these, such as, e.g., the effect of the flow-wetted fracture surface, channelling and the limited extent of the porous rock for matrix diffusion etc., are addressed. In a further section open issues and unresolved problems related to matrix diffusion are mentioned. Since matrix diffusion is one of the key retarding processes in geosphere transport of dissolved radionuclide species, matrix diffusion was consequently taken into account in past performance assessments of radioactive waste repositories in crystalline host rocks. Some issues regarding matrix diffusion are site-specific while others are independent of the specific situation of a planned repository for radioactive wastes. Eight different performance assessments from Finland, Sweden and Switzerland were considered with the aim of finding out how matrix diffusion was addressed, and whether a consistent picture emerges regarding the varying methodology of the different radioactive waste organisations. In the final section of the report some conclusions are drawn and an outlook is given. An extensive bibliography provides the reader with the key papers and reports related to matrix diffusion. (author)

  12. Simulating mesoscale coastal evolution for decadal coastal management: A new framework integrating multiple, complementary modelling approaches

    Science.gov (United States)

    van Maanen, Barend; Nicholls, Robert J.; French, Jon R.; Barkwith, Andrew; Bonaldo, Davide; Burningham, Helene; Brad Murray, A.; Payo, Andres; Sutherland, James; Thornhill, Gillian; Townend, Ian H.; van der Wegen, Mick; Walkden, Mike J. A.

    2016-03-01

    Coastal and shoreline management increasingly needs to consider morphological change occurring at decadal to centennial timescales, especially that related to climate change and sea-level rise. This requires the development of morphological models operating at a mesoscale, defined by time and length scales of the order 10¹ to 10² years and 10¹ to 10² km. So-called 'reduced complexity' models that represent critical processes at scales not much smaller than the primary scale of interest, and are regulated by capturing the critical feedbacks that govern landform behaviour, are proving effective as a means of exploring emergent coastal behaviour at a landscape scale. Such models tend to be computationally efficient and are thus easily applied within a probabilistic framework. At the same time, reductionist models, built upon a more detailed description of hydrodynamic and sediment transport processes, are capable of application at increasingly broad spatial and temporal scales. More qualitative modelling approaches are also emerging that can guide the development and deployment of quantitative models, and these can be supplemented by varied data-driven modelling approaches that can achieve new explanatory insights from observational datasets. Such disparate approaches have hitherto been pursued largely in isolation by mutually exclusive modelling communities. Brought together, they have the potential to facilitate a step change in our ability to simulate the evolution of coastal morphology at scales that are most relevant to managing erosion and flood risk. Here, we advocate and outline a new integrated modelling framework that deploys coupled mesoscale reduced complexity models, reductionist coastal area models, data-driven approaches, and qualitative conceptual models. Integration of these heterogeneous approaches gives rise to model compositions that can potentially resolve decadal- to centennial-scale behaviour of diverse coupled open coast, estuary and inner

  13. Topographic controls on shallow groundwater levels in a steep, prealpine catchment: When are the TWI assumptions valid?

    NARCIS (Netherlands)

    Rinderer, M.; van Meerveld, H.J.; Seibert, J.

    2014-01-01

    Topographic indices like the Topographic Wetness Index (TWI) have been used to predict spatial patterns of average groundwater levels and to model the dynamics of the saturated zone during events (e.g., TOPMODEL). However, the assumptions underlying the use of the TWI in hydrological models, of
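
    For reference, the sketch below shows how the TWI named above is computed, TWI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope; the grids are tiny synthetic arrays, not data from the study catchment.

      # Minimal sketch of the topographic wetness index on synthetic grids.
      import numpy as np

      specific_area = np.array([[5.0, 20.0], [150.0, 800.0]])   # m2 per unit contour length
      slope_deg = np.array([[35.0, 20.0], [8.0, 2.0]])          # steep prealpine slopes

      tan_beta = np.tan(np.deg2rad(slope_deg))
      twi = np.log(specific_area / np.maximum(tan_beta, 1e-6))  # guard against flat cells
      print(np.round(twi, 2))   # higher TWI -> predicted wetter, shallower groundwater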

  14. Breakdown of Hydrostatic Assumption in Tidal Channel with Scour Holes

    Directory of Open Access Journals (Sweden)

    Chunyan Li

    2016-10-01

    The hydrostatic condition is a common assumption in tidal and subtidal motions in oceans and estuaries. Theories with this assumption have been largely successful. However, there is no definite criterion separating the hydrostatic from the non-hydrostatic regimes in real applications because real problems often have multiple scales. With increased refinement of high resolution numerical models encompassing smaller and smaller spatial scales, the need for non-hydrostatic models is increasing. To evaluate the vertical motion over bathymetric changes in tidal channels and assess the validity of the hydrostatic approximation, we conducted observations using a vessel-based acoustic Doppler current profiler (ADCP). Observations were made along a straight channel 18 times over two scour holes 25 m deep, separated by 330 m, in and out of an otherwise flat 8 m deep tidal pass leading to Lake Pontchartrain over a time period of 8 hours covering part of the diurnal tidal cycle. Out of the 18 passages over the scour holes, 11 of them showed strong upwelling and downwelling which resulted in the breakdown of the hydrostatic condition. The maximum observed vertical velocity was ~0.35 m/s, a high value in a tidal channel, and the estimated vertical acceleration reached a high value of 1.76×10⁻² m/s². Analysis demonstrated that the barotropic non-hydrostatic acceleration was dominant. The non-hydrostatic flow was caused by the flow over the steep slopes of the scour holes. This demonstrates that in such a system, the bathymetric variation can lead to the breakdown of hydrostatic conditions. Models with hydrostatic restrictions will not be able to correctly capture the dynamics in such a system with significant bathymetric variations, particularly during strong tidal currents.

  15. A Proposed Approach for Joint Modeling of the Longitudinal and Time-To-Event Data in Heterogeneous Populations: An Application to HIV/AIDS's Disease.

    Science.gov (United States)

    Roustaei, Narges; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf

    2018-01-01

    In recent years, the joint models have been widely used for modeling the longitudinal and time-to-event data simultaneously. In this study, we proposed an approach (PA) to study the longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with joint latent class model (JLCM) and separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameters estimation (AB-PE), average SE of parameters estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) among the three approaches were compared. In most cases, when the sample sizes increased, AB-PE and ASE-PE decreased for the three approaches, and CP got closer to the nominal level of 0.95. When there was a considerable association, PA in comparison with SA and JLCM performed better in the sense that PA had the smallest AB-PE and ASE-PE for the longitudinal submodel among the three approaches for the small and moderate sample sizes. Moreover, JLCM was desirable for the none-association and the large sample size. Finally, the evaluated approaches were applied on a real HIV/AIDS dataset for validation, and the results were compared.

  16. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes their futility. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics.

  17. Critical Analysis of Underground Coal Gasification Models. Part II: Kinetic and Computational Fluid Dynamics Models

    Directory of Open Access Journals (Sweden)

    Alina Żogała

    2014-01-01

    Originality/value: This paper presents the state of the art in coal gasification modeling using kinetic and computational fluid dynamics approaches. The paper also presents the authors' own comparative analysis (concerning mathematical formulation, input data and parameters, basic assumptions, obtained results, etc.) of the most important models of underground coal gasification.

  18. Spreading dynamics on complex networks: a general stochastic approach.

    Science.gov (United States)

    Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J

    2014-12-01

    Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
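
    For concreteness, here is a minimal, hedged sketch of the kind of dynamics this framework targets: a plain discrete-time susceptible-infectious-susceptible (SIS) simulation on a random graph. It is a baseline agent-style simulation for comparison, not the motif-based Markov approach of the paper; the graph size and the beta/mu rates are illustrative assumptions.

```python
# Minimal discrete-time SIS simulation on a random graph (illustrative baseline only).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(n=500, p=0.02, seed=0)
beta, mu, steps = 0.05, 0.1, 200                  # per-step infection and recovery probabilities

infected = set(int(i) for i in rng.choice(G.number_of_nodes(), size=5, replace=False))
prevalence = []
for _ in range(steps):
    nxt = set(infected)
    for i in infected:
        for j in G.neighbors(i):                  # infectious node i tries to infect neighbours
            if j not in infected and rng.random() < beta:
                nxt.add(j)
        if rng.random() < mu:                     # recovery: back to susceptible
            nxt.discard(i)
    infected = nxt
    prevalence.append(len(infected) / G.number_of_nodes())

print(f"mean prevalence over the last 50 steps: {np.mean(prevalence[-50:]):.3f}")
```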

  19. Critically Challenging Some Assumptions in HRD

    Science.gov (United States)

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  20. The Arundel Assumption And Revision Of Some Large-Scale Maps ...

    African Journals Online (AJOL)

    The rather common practice of stating or using the Arundel Assumption without reference to appropriate mapping standards (except mention of its use for graphical plotting) is a major cause of inaccuracies in map revision. This paper describes an investigation to ascertain the applicability of the Assumption to the revision of ...

  1. Credit risk migration rates modeling as open systems: A micro-simulation approach

    Science.gov (United States)

    Landini, S.; Uberti, M.; Casellina, S.

    2018-05-01

    The last financial crisis of 2008 stimulated the development of new Regulatory Criteria (commonly known as Basel III) that pushed banking activity to become more prudential, in both the short and the long run. As is well known, in 2014 the International Accounting Standards Board (IASB) promulgated the new International Financial Reporting Standard 9 (IFRS 9) for financial instruments, which will become effective in January 2018. Since the delayed recognition of credit losses on loans was identified as a weakness in existing accounting standards, the IASB has introduced an Expected Loss model that requires more timely recognition of credit losses. Specifically, the new standards require entities to account for expected losses both from when the impairments are first recognized and over the full loan lifetime; moreover, a clear preference for forward-looking models is expressed. In this new framework, a re-thinking of the widespread standard theoretical approach on which the well-known prudential model is founded is necessary. The aim of this paper is therefore to define an original methodological approach to migration rates modeling for credit risk that is innovative with respect to the standard method, from the point of view of a bank as well as from a regulatory perspective. Accordingly, the proposed non-standard approach considers a portfolio as an open sample, allowing for entries and exits as well as migrations of stayers. While being consistent with the empirical observations, this open-sample approach contrasts with the standard closed-sample method. In particular, this paper offers a methodology to integrate the outcomes of the standard closed-sample method within the open-sample perspective while removing some of the assumptions of the standard method. Three main conclusions can be drawn in terms of economic capital provision: (a) based on the Markovian hypothesis with an a-priori absorbing state at default, the standard closed-sample method is to be abandoned
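
    As a point of reference for the standard method discussed above, the sketch below shows the simplest cohort-style estimate of a one-period migration matrix, with an extra "exit" state so that departures from the portfolio are at least counted rather than discarded. It is purely illustrative (toy ratings, toy data), not the authors' open-sample micro-simulation.

```python
# Cohort-style estimate of a one-period rating migration matrix (illustrative only).
# States: ratings 0..2, an absorbing default state (3), and an "exit" state (4) so
# that loans leaving the portfolio are counted instead of silently dropped.
import numpy as np

def migration_matrix(start_states, end_states, n_states):
    """Row-normalised transition frequencies between two observation dates."""
    counts = np.zeros((n_states, n_states))
    for s, e in zip(start_states, end_states):
        counts[s, e] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore"):
        probs = counts / row_sums
    return np.nan_to_num(probs)          # rows with no observations become all zeros

start = [0, 0, 1, 1, 1, 2, 2, 2, 2]      # toy ratings at the first date
end   = [0, 1, 1, 2, 4, 2, 2, 3, 4]      # toy states at the second date
P = migration_matrix(start, end, n_states=5)
print(np.round(P, 2))
```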

  2. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  3. The European Water Framework Directive: How Ecological Assumptions Frame Technical and Social Change

    Directory of Open Access Journals (Sweden)

    Patrick Steyaert

    2007-06-01

    Full Text Available The European Water Framework Directive (WFD) is built upon significant cognitive developments in the field of ecological science but also encourages the active involvement of all interested parties in its implementation. The coexistence in the same policy text of both substantive and procedural approaches to policy development stimulated this research, as did our concerns about the implications of substantive ecological visions within the WFD policy for promoting, or not, social learning processes through participatory designs. We have used a qualitative analysis of the WFD text, which shows that the ecological dimension of the WFD dedicates its quasi-exclusive attention to a particular current of thought in ecosystems science, one focusing on ecosystem status and stability and considering human activities as disturbance factors. This particular worldview is juxtaposed within the WFD with a more utilitarian one that gives rise to many policy exemptions without changing the general underlying ecological model. We discuss these policy statements in the light of the tension between substantive and procedural policy developments. We argue that the dominant substantive approach of the WFD, comprising particular ecological assumptions built upon "compositionalism," seems to contradict its espoused intention of involving the public. We discuss that current of thought in relation to more functionalist thinking and adaptive management, which offer greater opportunities for social learning, i.e., placing a set of interdependent stakeholders in an intersubjective position in which they engage in a "social construction" of water problems through the co-production of knowledge.

  4. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  5. Biological ensemble modeling to evaluate potential futures of living marine resources

    DEFF Research Database (Denmark)

    Gårdmark, Anna; Lindegren, Martin; Neuenfeldt, Stefan

    2013-01-01

    ) as an example. The core of the approach is to expose an ensemble of models with different ecological assumptions to climate forcing, using multiple realizations of each climate scenario. We simulated the long-term response of cod to future fishing and climate change in seven ecological models ranging from...... model assumptions from the statistical uncertainty of future climate, and (3) identified results common for the whole model ensemble. Species interactions greatly influenced the simulated response of cod to fishing and climate, as well as the degree to which the statistical uncertainty of climate...... in all models, intense fishing prevented recovery, and climate change further decreased the cod population. Our study demonstrates how the biological ensemble modeling approach makes it possible to evaluate the relative importance of different sources of uncertainty in future species responses, as well...

  6. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.

  7. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Full Text Available Introduction: Statistical software typically does not require extensive statistical knowledge, making it easy to perform even complex analyses. Consequently, test selection criteria and important assumptions may be overlooked or given insufficient consideration, and the results may then lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, so the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances across groups. If the assumptions are violated, the original design of the test is impaired and the test may give spurious results. A simple way to normalize the data and stabilize the variance is to use transformations. If such an approach fails, a good alternative is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis, or Wilcoxon signed-rank test. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in data typical of sports science studies comes down to a straightforward procedure.
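
    A minimal sketch of the workflow described above, assuming two independent groups: check normality and homogeneity of variance, then fall back to a non-parametric test (or try a transformation) when the assumptions fail. The data, the alpha level, and the choice of Shapiro-Wilk and Levene tests as the checks are illustrative assumptions.

```python
# Assumption checks with a non-parametric fallback, for two independent groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=0.0, sigma=0.6, size=30)   # right-skewed data, e.g. sprint times
group_b = rng.lognormal(mean=0.2, sigma=0.6, size=30)

def compare_two_groups(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal:
        test = "Student t" if equal_var else "Welch t"
        res = stats.ttest_ind(a, b, equal_var=equal_var)
    else:
        test = "Mann-Whitney U"
        res = stats.mannwhitneyu(a, b, alternative="two-sided")
    return test, res.pvalue

print(compare_two_groups(group_a, group_b))
# A log transformation often normalises right-skewed data, after which the t-test applies:
print(compare_two_groups(np.log(group_a), np.log(group_b)))
```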

  8. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of ‘decay' as an ad hoc approach to cope...... with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed-behaviour assumption, which is considered...... the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of the estimation precision....
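
    The core computational building block of any HMM-based behaviour model is evaluating how likely a sequence of observed interaction outcomes is under a hypothesised hidden-state model, which the forward algorithm does in linear time. The sketch below is a generic two-state example with made-up parameters; it is not the estimation procedure of the paper.

```python
# Forward algorithm for a 2-state HMM over binary interaction outcomes
# (0 = unsatisfactory, 1 = satisfactory). All parameters below are illustrative only.
import numpy as np

pi = np.array([0.5, 0.5])                 # initial state distribution
A  = np.array([[0.95, 0.05],              # state transition matrix (behaviour drift)
               [0.10, 0.90]])
B  = np.array([[0.8, 0.2],                # emission probabilities: row = hidden state,
               [0.3, 0.7]])               # column = observed outcome (0 or 1)

def sequence_likelihood(obs):
    """P(observations | HMM) via the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

obs = [1, 1, 0, 1, 1, 1, 0, 0, 0, 0]      # a principal whose behaviour deteriorates
print(f"likelihood under this behaviour model: {sequence_likelihood(obs):.3e}")
```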

  9. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  10. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team-internal assumptions, planning for system integration in early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explains the driving scenarios, constraints, or other issues behind them.

  11. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2017-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute over what constitutes proper assumptions, even in the absence of corroborating or falsifying empirical evidence. We also discuss how changing assumptions may drive future progress in the resource-based view.

  12. Markovian approach: From Ising model to stochastic radiative transfer

    International Nuclear Information System (INIS)

    Kassianov, E.; Veron, D.

    2009-01-01

    The origin of the Markovian approach can be traced back to 1906; however, it gained explicit recognition in the last few decades. This overview outlines some important applications of the Markovian approach, which illustrate its immense prestige, respect, and success. These applications include examples in the statistical physics, astronomy, mathematics, computational science and the stochastic transport problem. In particular, the overview highlights important contributions made by Pomraning and Titov to the neutron and radiation transport theory in a stochastic medium with homogeneous statistics. Using simple probabilistic assumptions (Markovian approximation), they have introduced a simplified, but quite realistic, representation of the neutron/radiation transfer through a two-component discrete stochastic mixture. New concepts and methodologies introduced by these two distinguished scientists allow us to generalize the Markovian treatment to the stochastic medium with inhomogeneous statistics and demonstrate its improved predictive performance for the down-welling shortwave fluxes. (authors)

  13. Modelling Commodity Demands and Labour Supply with m-Demands

    OpenAIRE

    Browning, Martin

    1999-01-01

    In the empirical modelling of demands and labour supply we often lack data on a full set of goods. The usual response is to invoke separability assumptions. Here we present an alternative based on modelling demands as a function of prices and the quantity of a reference good rather than total expenditure. We term such demands m-demands. The advantage of this approach is that we make maximum use of the data to hand without invoking implausible separability assumptions. In the theory section qu...

  14. Coupled sulfur isotopic and chemical mass transfer modeling: Approach and application to dynamic hydrothermal processes

    International Nuclear Information System (INIS)

    Janecky, D.R.

    1988-01-01

    A computational modeling code (EQPSreverse arrowS) has been developed to examine sulfur isotopic distribution pathways coupled with calculations of chemical mass transfer pathways. A post processor approach to EQ6 calculations was chosen so that a variety of isotopic pathways could be examined for each reaction pathway. Two types of major bounding conditions were implemented: (1) equilibrium isotopic exchange between sulfate and sulfide species or exchange only accompanying chemical reduction and oxidation events, and (2) existence or lack of isotopic exchange between solution species and precipitated minerals, parallel to the open and closed chemical system formulations of chemical mass transfer modeling codes. All of the chemical data necessary to explicitly calculate isotopic distribution pathways is generated by most mass transfer modeling codes and can be input to the EQPS code. Routines are built in to directly handle EQ6 tabular files. Chemical reaction models of seafloor hydrothermal vent processes and accompanying sulfur isotopic distribution pathways illustrate the capabilities of coupling EQPSreverse arrowS with EQ6 calculations, including the extent of differences that can exist due to the isotopic bounding condition assumptions described above. 11 refs., 2 figs

  15. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about the ...

  16. Investigating the Assumptions of Uses and Gratifications Research

    Science.gov (United States)

    Lometti, Guy E.; And Others

    1977-01-01

    Discusses a study designed to determine empirically the gratifications sought from communication channels and to test the assumption that individuals differentiate channels based on gratifications. (MH)

  17. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is based on an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.

  18. The Avalanche Hypothesis and Compression of Morbidity: Testing Assumptions through Cohort-Sequential Analysis.

    Directory of Open Access Journals (Sweden)

    Jordan Silberman

    Full Text Available The compression of morbidity model posits a breakpoint in the adult lifespan that separates an initial period of relative health from a subsequent period of ever increasing morbidity. Researchers often assume that such a breakpoint exists; however, this assumption is hitherto untested. Our aim was to test the assumption that a breakpoint exists--which we term a morbidity tipping point--separating a period of relative health from a subsequent deterioration in health status; an analogous tipping point for healthcare costs was also investigated. Four years of adults' (N = 55,550) morbidity and costs data were retrospectively analyzed. Data were collected in Pittsburgh, PA between 2006 and 2009; analyses were performed in Rochester, NY and Ann Arbor, MI in 2012 and 2013. Cohort-sequential and hockey stick regression models were used to characterize long-term trajectories and tipping points, respectively, for both morbidity and costs. Morbidity increased exponentially with age (P<.001). A morbidity tipping point was observed at age 45.5 (95% CI, 41.3-49.7). An exponential trajectory was also observed for costs (P<.001), with a costs tipping point occurring at age 39.5 (95% CI, 32.4-46.6). Following their respective tipping points, both morbidity and costs increased substantially (Ps<.001). Findings support the existence of a morbidity tipping point, confirming an important but untested assumption. This tipping point, however, may occur earlier in the lifespan than is widely assumed. An "avalanche of morbidity" occurred after the morbidity tipping point: an ever increasing rate of morbidity progression. For costs, an analogous tipping point and "avalanche" were observed. The time point at which costs began to increase substantially occurred approximately 6 years before health status began to deteriorate.
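
    The hockey-stick (breakpoint) regression named above can be sketched in a few lines: a flat segment followed by a linear increase, with the breakpoint estimated jointly with the slope. The synthetic data, the functional form, and the starting values below are illustrative assumptions, not the study's cohort data.

```python
# Hockey-stick (breakpoint) regression sketch: flat-to-increasing morbidity with age.
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(age, breakpoint, intercept, slope_after):
    """Constant level before the breakpoint, linear increase after it."""
    return intercept + slope_after * np.maximum(0.0, age - breakpoint)

rng = np.random.default_rng(2)
age = rng.uniform(20, 80, size=400)
morbidity = hockey_stick(age, 45.0, 1.0, 0.15) + rng.normal(0, 0.5, size=age.size)

params, cov = curve_fit(hockey_stick, age, morbidity, p0=[50.0, 1.0, 0.1])
se = np.sqrt(np.diag(cov))
print(f"estimated tipping point: {params[0]:.1f} years (SE {se[0]:.1f})")
```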

  19. Combining engineering and data-driven approaches

    DEFF Research Database (Denmark)

    Fischer, Katharina; De Sanctis, Gianluca; Kohler, Jochen

    2015-01-01

    Two general approaches may be followed for the development of a fire risk model: statistical models based on observed fire losses can support simple cost-benefit studies but are usually not detailed enough for engineering decision-making. Engineering models, on the other hand, require many assumptions that may result in a biased risk assessment. In two related papers we show how engineering and data-driven modelling can be combined by developing generic risk models that are calibrated to statistical data on observed fire events. The focus of the present paper is on the calibration procedure... to the calibration of a generic fire risk model for single family houses to Swiss insurance data. The example demonstrates that the bias in the risk estimation can be strongly reduced by model calibration.

  20. The Emperor's sham - wrong assumption that sham needling is sham.

    Science.gov (United States)

    Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil

    2008-12-01

    During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.

  1. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment.
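
    As one concrete example of an assumption-laden between-sample method, here is a plain-numpy sketch of median-of-ratios normalization (the approach popularized by DESeq). Its central assumption is that most genes are not differentially expressed between samples, so the median ratio to a pseudo-reference reflects sequencing depth only. The simulated counts are illustrative.

```python
# Median-of-ratios size factors (DESeq-style) in plain numpy.
import numpy as np

def median_of_ratios_factors(counts):
    """counts: genes x samples matrix of raw read counts; returns one size factor per sample."""
    counts = counts.astype(float)
    keep = (counts > 0).all(axis=1)                 # genes with any zero count are excluded
    log_counts = np.log(counts[keep])
    log_ref = log_counts.mean(axis=1)               # log of the geometric-mean pseudo-reference
    return np.exp(np.median(log_counts - log_ref[:, None], axis=0))

rng = np.random.default_rng(3)
counts = rng.negative_binomial(5, 0.1, size=(2000, 4))
counts[:, 2] *= 3                                   # pretend sample 2 was sequenced ~3x deeper
factors = median_of_ratios_factors(counts)
normalised = counts / factors                       # depth-adjusted counts
print(np.round(factors, 2))                         # third factor ends up ~3x the others
```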

  2. The homogeneous marginal utility of income assumption

    NARCIS (Netherlands)

    Demuynck, T.

    2015-01-01

    We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and

  3. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    Science.gov (United States)

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  4. Two modelling approaches to water-quality simulation in a flooded iron-ore mine (Saizerais, Lorraine, France): a semi-distributed chemical reactor model and a physically based distributed reactive transport pipe network model.

    Science.gov (United States)

    Hamm, V; Collon-Drouaillet, P; Fabriol, R

    2008-02-19

    The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This issue has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches, using different numerical tools, that we tested on the Saizerais flooded iron-ore mine (Lorraine, France). The first approach considers the Saizerais Mine as a network of two chemical reactors (NCR). The second approach is based on a physically distributed pipe network model (PNM) built with EPANET 2 software; it considers the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate the evolution of water quality in the flooded mine. In order to obtain a robust PNM, however, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between simulated and measured sulphate concentrations in the drinking-water well and the overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified sulphate dissolution-precipitation model provides better information on the physics controlling the effects of flow and low-flow zones and on the time to solid sulphate removal, whereas the NCR model will underestimate clean-up time due to its complete-mixing assumption. In conclusion, the detailed NCR model will give a first assessment of chemical processes at the overflow, and the PNM model will then provide more
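
    To make the network-of-reactors idea concrete, the sketch below simulates two well-mixed reservoirs in series, each flushed by a through-flow and fed by dissolution from a finite solid sulphate stock. All volumes, flows, and rate constants are invented illustrative values, not the Saizerais calibration, and the chemistry is reduced to a single first-order dissolution term.

```python
# Minimal network-of-chemical-reactors (NCR) sketch: two well-mixed reservoirs in series.
import numpy as np

def simulate(years=60.0, dt=0.01):
    V = np.array([2.0e6, 5.0e6])         # reservoir volumes [m3]
    Q = 1.5e5                            # through-flow [m3/yr]
    stock = np.array([4.0e6, 8.0e6])     # leachable solid sulphate stock [kg]
    k = 0.05                             # first-order dissolution rate [1/yr]
    C = np.array([2.5, 2.5])             # initial dissolved sulphate [kg/m3]
    history = []
    for _ in range(int(years / dt)):
        release = k * stock * dt                     # sulphate dissolved this step [kg]
        stock = stock - release
        inflow = np.array([0.0, C[0]])               # reactor 2 is fed by reactor 1's outflow
        C = C + (Q * (inflow - C) * dt + release) / V
        history.append(C[1])                         # concentration at the downstream overflow
    return np.array(history)

sulphate = simulate()
print(f"sulphate at the overflow after 60 years: {sulphate[-1]:.2f} kg/m3")
```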

  5. Examination of Conservatism in Ground-level Source Release Assumption when Performing Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung-yeop; Lim, Ho-Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    One of the assumptions frequently made in consequence analysis is that of a ground-level source release. The user manual of the consequence analysis software HotSpot states: 'If you cannot estimate or calculate the effective release height, the actual physical release height (height of the stack) or zero for ground-level release should be used. This will usually yield a conservative estimate, (i.e., larger radiation doses for all downwind receptors, etc).' This recommendation is defensible from the standpoint of conservatism, but a quantitative examination of the effect of this assumption on the results of consequence analysis is necessary. The source terms of the Fukushima Dai-ichi NPP accident have been estimated by several studies using inverse modeling, and one of the biggest sources of the differences between the results of these studies was the effective source release height assumed by each study. This underlines the importance of quantitatively examining the influence of release height. In this study, a sensitivity analysis of the effective release height of radioactive sources was performed and its influence on the total effective dose was quantitatively examined. A difference of more than 20% is maintained even at longer distances when the dose computed assuming a ground-level release is compared with the results assuming other effective plume heights. This means that the influence of the ground-level source assumption on latent cancer fatality estimates cannot be ignored. In addition, the assumption of a ground-level release fundamentally prevents detailed analysis, including diffusion of the plume from the effective plume height down to the ground, even though this influence is relatively smaller at longer distances. When the influence of surface roughness is also considered, the situation could be more serious. The ground-level dose could be highly over-estimated at short downwind distances at NPP sites which have low surface roughness, such as the Barakah site in
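
    To see why the conservatism of the ground-level assumption shrinks with distance, a generic Gaussian-plume calculation (not HotSpot itself) of the ground-level centreline concentration can be compared for H = 0 and an elevated release. The sigma_y/sigma_z power laws, wind speed, and release height below are illustrative placeholders, not a specific stability-class parameterisation or site data.

```python
# Generic Gaussian-plume comparison of ground-level vs. elevated release (illustrative).
import numpy as np

def centreline_concentration(x, Q=1.0, u=3.0, H=0.0):
    """Ground-level, centreline air concentration at downwind distance x [m], with ground reflection."""
    sigma_y = 0.08 * x**0.9          # assumed horizontal dispersion [m]
    sigma_z = 0.06 * x**0.8          # assumed vertical dispersion [m]
    return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H**2 / (2.0 * sigma_z**2))

for x in (500.0, 2000.0, 10000.0):
    ratio = centreline_concentration(x, H=0.0) / centreline_concentration(x, H=60.0)
    print(f"x = {x:7.0f} m   ground-release / 60 m-release concentration ratio = {ratio:.3g}")
```

    Under these assumed dispersion coefficients the over-estimate is enormous close to the source and decays to a few tens of percent at 10 km, which is qualitatively consistent with the behaviour described above.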

  6. Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption.

    Science.gov (United States)

    Hartwig, Fernando Pires; Davey Smith, George; Bowden, Jack

    2017-12-01

    Mendelian randomization (MR) is being increasingly used to strengthen causal inference in observational studies. Availability of summary data of genetic associations for a variety of phenotypes from large genome-wide association studies (GWAS) allows straightforward application of MR using summary data methods, typically in a two-sample design. In addition to the conventional inverse variance weighting (IVW) method, recently developed summary data MR methods, such as the MR-Egger and weighted median approaches, allow a relaxation of the instrumental variable assumptions. Here, a new method - the mode-based estimate (MBE) - is proposed to obtain a single causal effect estimate from multiple genetic instruments. The MBE is consistent when the largest number of similar (identical in infinite samples) individual-instrument causal effect estimates comes from valid instruments, even if the majority of instruments are invalid. We evaluate the performance of the method in simulations designed to mimic the two-sample summary data setting, and demonstrate its use by investigating the causal effect of plasma lipid fractions and urate levels on coronary heart disease risk. The MBE presented less bias and lower type-I error rates than other methods under the null in many situations. Its power to detect a causal effect was smaller compared with the IVW and weighted median methods, but was larger than that of MR-Egger regression, with sample size requirements typically smaller than those available from GWAS consortia. The MBE relaxes the instrumental variable assumptions, and should be used in combination with other approaches in sensitivity analyses.
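
    The idea behind the MBE can be sketched in a few lines: form the per-instrument Wald ratios and take the mode of their distribution, here via an unweighted kernel density estimate. The published estimator additionally uses inverse-variance weighting and specific bandwidth rules, so the code below is only a simplified illustration on simulated data.

```python
# Simplified, unweighted mode-based estimate from per-SNP Wald ratios (illustrative sketch).
import numpy as np
from scipy.stats import gaussian_kde

def mode_based_estimate(beta_exposure, beta_outcome, grid_points=10_000):
    ratios = np.asarray(beta_outcome) / np.asarray(beta_exposure)   # Wald ratio per instrument
    kde = gaussian_kde(ratios)
    grid = np.linspace(ratios.min(), ratios.max(), grid_points)
    return grid[np.argmax(kde(grid))]

rng = np.random.default_rng(4)
n_snps, true_effect = 30, 0.25
beta_x = rng.uniform(0.05, 0.2, n_snps)
pleiotropy = np.where(rng.random(n_snps) < 0.4, rng.normal(0.05, 0.02, n_snps), 0.0)
beta_y = true_effect * beta_x + pleiotropy + rng.normal(0, 0.005, n_snps)

print(f"simple mean of ratios:   {np.mean(beta_y / beta_x):.3f}")   # pulled away by invalid SNPs
print(f"mode-based estimate:     {mode_based_estimate(beta_x, beta_y):.3f}")  # typically near 0.25
```

    Because the pleiotropic instruments scatter their ratios away from the true value, the mean is biased while the densest cluster of ratios, and hence the mode, tends to stay near the true effect.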

  7. Influence of road network and population demand assumptions in evacuation modeling for distant tsunamis

    Science.gov (United States)

    Henry, Kevin; Wood, Nathan J.; Frazier, Tim G.

    2017-01-01

    Tsunami evacuation planning in coastal communities is typically focused on local events where at-risk individuals must move on foot in a matter of minutes to safety. Less attention has been placed on distant tsunamis, where evacuations unfold over several hours, are often dominated by vehicle use and are managed by public safety officials. Traditional traffic simulation models focus on estimating clearance times but often overlook the influence of varying population demand, alternative modes, background traffic, shadow evacuation, and traffic management alternatives. These factors are especially important for island communities with limited egress options to safety. We use the coastal community of Balboa Island, California (USA), as a case study to explore the range of potential clearance times prior to wave arrival for a distant tsunami scenario. We use a first-in–first-out queuing simulation environment to estimate variations in clearance times, given varying assumptions of the evacuating population (demand) and the road network over which they evacuate (supply). Results suggest clearance times are less than wave arrival times for a distant tsunami, except when we assume maximum vehicle usage for residents, employees, and tourists for a weekend scenario. A two-lane bridge to the mainland was the primary traffic bottleneck, thereby minimizing the effect of departure times, shadow evacuations, background traffic, boat-based evacuations, and traffic light timing on overall community clearance time. Reducing vehicular demand generally reduced clearance time, whereas improvements to road capacity had mixed results. Finally, failure to recognize non-residential employee and tourist populations in the vehicle demand substantially underestimated clearance time.
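
    The clearance-time logic of a first-in-first-out bottleneck can be sketched as a single-server queue: vehicles arrive according to a departure-time distribution and cross a bridge with a fixed service capacity. The demand curve, fleet size, and capacities below are illustrative stand-ins, not the Balboa Island inputs.

```python
# Toy FIFO bottleneck model of an island evacuation over a two-lane bridge (illustrative).
import numpy as np

def clearance_time(departure_minutes, capacity_per_minute):
    """Departure times [min after warning]; returns the time the last vehicle clears the bridge."""
    crossings = np.sort(np.asarray(departure_minutes, dtype=float))
    next_free = 0.0
    service = 1.0 / capacity_per_minute            # minutes of bridge time per vehicle
    for arrival in crossings:
        next_free = max(next_free, arrival) + service
    return next_free

rng = np.random.default_rng(5)
departures = rng.gamma(shape=2.0, scale=30.0, size=3000)   # most vehicles leave within ~2 hours

for capacity in (20.0, 40.0):                      # assumed bridge capacities [veh/min]
    print(f"capacity {capacity:>4.0f} veh/min -> clearance time ≈ {clearance_time(departures, capacity):.0f} min")
```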

  8. Estimating the global prevalence of inadequate zinc intake from national food balance sheets: effects of methodological assumptions.

    Directory of Open Access Journals (Sweden)

    K Ryan Wessells

    Full Text Available The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population's theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12% and 66%, depending on which methodological assumptions were applied. However, the country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57-0.99, P<0.01). A "best-estimate" model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country

  9. Determining Bounds on Assumption Errors in Operational Analysis

    Directory of Open Access Journals (Sweden)

    Neal M. Bengtson

    2014-01-01

    Full Text Available The technique of operational analysis (OA) is used in the study of systems performance, mainly for estimating mean values of various measures of interest, such as the number of jobs at a device and response times. The basic principles of operational analysis allow errors in assumptions to be quantified over a time period. The assumptions which are used to derive the operational analysis relationships are studied. Using Karush-Kuhn-Tucker (KKT) conditions, bounds on error measures of these OA relationships are found. Examples of these bounds are used for representative performance measures to show limits on the difference between true performance values and those estimated by operational analysis relationships. A technique for finding tolerance limits on the bounds is demonstrated with a simulation example.
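
    For readers unfamiliar with OA, the relationships in question are the operational laws computed from directly observable quantities over a measurement window, as in the sketch below. The observation period, counts, and the measured response time are made-up numbers; the point is only that derived measures inherit any error in the underlying assumptions (e.g. flow balance, A = C).

```python
# The basic operational laws, computed from directly observable quantities (toy numbers).
T = 600.0        # observation period [s]
A = 5400.0       # arrivals observed during T
C = 5400.0       # completions observed during T (flow balance assumed: A == C)
B = 480.0        # time the device was busy during T [s]
R = 2.5          # measured mean response time [s] (assumed observable here)

X = C / T        # throughput
U = B / T        # utilisation
S = B / C        # mean service time per job
N = X * R        # Little's law: mean number of jobs in the system

print(f"throughput X = {X:.2f} jobs/s, utilisation U = {U:.2f}")
print(f"mean service time S = {S * 1000:.1f} ms, mean jobs in system N = {N:.1f}")
print(f"utilisation law check: U = X * S -> {U:.3f} == {X * S:.3f}")
```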

  10. Wind Power accuracy and forecast. D3.1. Assumptions on accuracy of wind power to be considered at short and long term horizons

    Energy Technology Data Exchange (ETDEWEB)

    Morthorst, P.E.; Coulondre, J.M.; Schroeder, S.T.; Meibom, P.

    2010-07-15

    The main objective of the Optimate project (An Open Platform to Test Integration in new MArkeT designs of massive intermittent Energy sources dispersed in several regional power markets) is to develop a new tool for testing these new market designs with a large introduction of variable renewable energy sources. In Optimate a novel network/system/market modelling approach is being developed, generating an open simulation platform able to exhibit the comparative benefits of several market design options. This report constitutes deliverable 3.1 on the assumptions on the accuracy of wind power to be considered at short and long term horizons. The report covers the state of the art in wind power prediction, how wind power predictions enter the Optimate model, and a simple and a more advanced methodology for generating trajectories of prediction errors to be used in Optimate. The main conclusion is that, from a theoretical viewpoint, the advanced approach is undoubtedly to be preferred to the simple one. However, the advanced approach was developed for the Wilmar model with the purpose of describing the integration of large-scale wind power in Europe. As the main purpose of the Optimate model is not to test the integration of wind power, but to test new market designs assuming strong growth in wind power production, a simpler approach for describing wind power forecasts should be sufficient. Thus a further development of the simple approach is suggested, possibly including correlations between geographical areas. In this report the general methodologies for generating trajectories of wind power forecast errors are outlined. However, the methods are not yet implemented. In the next phase of Optimate, the clusters will be defined and the needed data collected. Following this phase, actual results will be generated for use in Optimate. (LN)
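
    A minimal version of the kind of "simple approach" described, assuming nothing about the actual Optimate specification: forecast error trajectories drawn from an AR(1) process whose spread grows with forecast horizon. The persistence phi, the maximum error level, and the horizon profile are illustrative placeholders, not calibrated parameters.

```python
# AR(1) wind power forecast-error trajectories with horizon-dependent spread (illustrative).
import numpy as np

def error_trajectories(n_traj, horizons, phi=0.97, sigma_max=0.15, seed=0):
    """Returns an (n_traj, horizons) array of forecast errors as a share of installed capacity."""
    rng = np.random.default_rng(seed)
    target_sd = sigma_max * np.sqrt(np.arange(1, horizons + 1) / horizons)  # error grows with lead time
    innovation_sd = target_sd * np.sqrt(1.0 - phi**2)   # innovations scaled so the spread builds up
    errors = np.zeros((n_traj, horizons))
    for t in range(horizons):
        prev = errors[:, t - 1] if t > 0 else 0.0
        errors[:, t] = phi * prev + rng.normal(0.0, innovation_sd[t], n_traj)
    return errors

traj = error_trajectories(n_traj=500, horizons=36)
print("empirical error sd every 6 hours:", np.round(traj.std(axis=0)[::6], 3))
```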

  11. Approach to uncertainty evaluation for safety analysis

    International Nuclear Information System (INIS)

    Ogura, Katsunori

    2005-01-01

    Nuclear power plant safety has generally been verified and confirmed through accident simulations using computer codes, because integrated experiments or tests for the verification and validation of plant safety are very difficult to perform owing to radioactive consequences, cost, and scaling to the actual plant. Traditionally, plant safety was secured by a sufficient safety margin obtained through the conservative assumptions and models applied in those simulations. More recently, best-estimate analyses based on realistic assumptions and models, supported by accumulated insights, have become possible; this reduces the safety margin in the analysis results and increases the need to evaluate the reliability or uncertainty of those results. This paper introduces an approach to evaluating the uncertainty of accident simulations and their results. (Note: This research was done not in the Japan Nuclear Energy Safety Organization but at the Tokyo Institute of Technology.) (author)

  12. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  13. Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi

    2016-01-01

    Recent advancements in set theory and readily available software have enabled social science researchers to bridge the variable-centered quantitative and case-based qualitative methodological paradigms in order to analyze multi-dimensional associations beyond the linearity assumptions, aggregate effects, unicausal reduction, and case specificity. Based on the developments in set theoretical thinking in social sciences and employing methods like Qualitative Comparative Analysis (QCA), Necessary Condition Analysis (NCA), and set visualization techniques, in this position paper, we propose and demonstrate a new approach to maturity models in the domain of Information Systems. This position paper describes the set-theoretical approach to maturity models, presents current results and outlines future research work.

  14. The assumption of heterogeneous or homogeneous radioactive contamination in soil/sediment: does it matter in terms of the external exposure of fauna?

    Science.gov (United States)

    Beaugelin-Seiller, K

    2014-12-01

    The classical approach to environmental radioprotection is based on the assumption of homogeneously contaminated media. However, in soils and sediments there may be a significant variation of radioactivity with depth. The effect of this heterogeneity was investigated by examining the external exposure of various sediment and soil organisms, and determining the resulting dose rates, assuming a realistic combination of locations and radionuclides. The results were dependent on the exposure situation, i.e., the organism, its location, and the quality and quantity of radionuclides. The dose rates ranged over three orders of magnitude. The assumption of homogeneous contamination was not consistently conservative (if associated with a level of radioactivity averaged over the full thickness of soil or sediment that was sampled). Dose assessment for screening purposes requires consideration of the highest activity concentration measured in a soil/sediment that is considered to be homogeneously contaminated. A more refined assessment (e.g., higher tier of a graded approach) should take into consideration a more realistic contamination profile, and apply different dosimetric approaches.

  15. Psychopathology, fundamental assumptions and CD-4 T lymphocyte ...

    African Journals Online (AJOL)

    In addition, we explored whether psychopathology and negative fundamental assumptions in ... Method: Self-rating questionnaires to assess depressive symptoms, ... associated with all participants scoring in the positive range of the FA scale.

  16. A Proposed Approach for Joint Modeling of the Longitudinal and Time-To-Event Data in Heterogeneous Populations: An Application to HIV/AIDS’s Disease

    Directory of Open Access Journals (Sweden)

    Narges Roustaei

    2018-01-01

    Full Text Available In recent years, joint models have been widely used for modeling longitudinal and time-to-event data simultaneously. In this study, we proposed an approach (PA) to study the longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with the joint latent class model (JLCM) and a separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameter estimation (AB-PE), average SE of parameter estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) were compared among the three approaches. In most cases, when the sample sizes increased, AB-PE and ASE-PE decreased for the three approaches, and CP got closer to the nominal level of 0.95. When there was a considerable association, PA performed better than SA and JLCM in the sense that it had the smallest AB-PE and ASE-PE for the longitudinal submodel for the small and moderate sample sizes. Moreover, JLCM was preferable for the no-association case and the large sample size. Finally, the evaluated approaches were applied to a real HIV/AIDS dataset for validation, and the results were compared.

  17. Modeling the evolution of natural cliffs subject to weathering. 1, Limit analysis approach

    OpenAIRE

    Utili, Stefano; Crosta, Giovanni B.

    2011-01-01

    Retrogressive landsliding evolution of natural slopes subjected to weathering has been modeled by assuming Mohr-Coulomb material behavior and by using an analytical method. The case of weathering-limited slope conditions, with complete erosion of the accumulated debris, has been modeled. The limit analysis upper-bound method is used to study slope instability induced by a homogeneous decrease of material strength in space and time. The only assumption required in the model concerns the degree...

  18. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    Science.gov (United States)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, several areas of the country have sparse monitoring, both spatially and temporally. One means of filling these monitoring gaps is to use PM2.5 estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort goes into quantifying the efficacy of these models through different metrics of model performance, but evaluation is currently restricted to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains; error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear. This evaluation leads to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.
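
    A stripped-down illustration of a regionalised, non-linear error correction, under the assumption that within a region the CMAQ error can be modelled as a smooth function of the predicted concentration (a quadratic here; the actual study's correction and the subsequent BME step are considerably more sophisticated).

```python
# Regional non-linear bias correction of model output against monitor data (illustrative).
import numpy as np

def fit_regional_correction(model_at_monitors, observed, degree=2):
    """Returns a callable mapping raw model values to corrected values for one region."""
    error = observed - model_at_monitors                    # model error at monitor locations
    coeffs = np.polyfit(model_at_monitors, error, deg=degree)
    return lambda raw: raw + np.polyval(coeffs, raw)        # corrected = raw + predicted error

rng = np.random.default_rng(7)
cmaq = rng.uniform(2.0, 40.0, 300)                          # PM2.5 predictions at monitors, ug/m3
obs = cmaq - 0.004 * cmaq**2 + rng.normal(0, 1.5, cmaq.size)  # error grows with concentration
correct = fit_regional_correction(cmaq, obs)
print(f"raw bias:       {np.mean(obs - cmaq):+.2f} ug/m3")
print(f"corrected bias: {np.mean(obs - correct(cmaq)):+.2f} ug/m3")
```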

  19. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    Science.gov (United States)

    Baisden, W. T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.

  20. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    International Nuclear Information System (INIS)

    Baisden, W.T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
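
    A toy illustration of the two-time-point idea under strong simplifying assumptions: a single well-mixed carbon pool exchanging with the atmosphere at rate k, an annual time step, radioactive decay neglected, and a crude synthetic stand-in for the atmospheric bomb-14C record (a real analysis would use a calibrated atmospheric dataset). Given soil fraction-modern values measured in two years, we solve for the k that reproduces the observed change.

```python
# Single-pool, two-time-point turnover estimate from bomb-14C (toy sketch).
import numpy as np
from scipy.optimize import brentq

years = np.arange(1900, 2011)
f_atm = np.ones(years.size)                                   # pre-bomb fraction modern ~1.0
bomb = years >= 1955
f_atm[bomb] = 1.0 + 0.8 * np.exp(-(years[bomb] - 1964.0) ** 2 / 50.0)        # crude bomb spike
post = years > 1964
f_atm[post] = 1.0 + 0.8 * np.exp(-(years[post] - 1964.0) / 16.0)             # post-peak decline

def soil_fraction_modern(k, year, f0=1.0):
    """Annual time-step single-pool model: F_soil(t+1) = F_soil(t) + k*(F_atm(t) - F_soil(t))."""
    f = f0
    for y, fa in zip(years, f_atm):
        if y > year:
            break
        f = f + k * (fa - f)
    return f

def fit_turnover(obs1, obs2, k_max=0.05):
    """obs = (year, measured soil fraction modern); solve for k matching the observed change."""
    (y1, f1), (y2, f2) = obs1, obs2
    g = lambda k: (soil_fraction_modern(k, y2) - soil_fraction_modern(k, y1)) - (f2 - f1)
    return brentq(g, 1e-4, k_max)        # bracket deliberately restricted to slow turnover

k = fit_turnover((1965, 1.02), (1975, 1.07))                  # hypothetical measurements
print(f"turnover rate k ≈ {k:.3f} /yr  ->  mean residence time ≈ {1 / k:.0f} yr")
```

    Note that bomb-derived 14C generally admits both a fast- and a slow-cycling solution; the restricted bracket above picks the slow branch, which is one reason the abstract stresses the value of time series and robust assumptions.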

  1. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  2. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    Directory of Open Access Journals (Sweden)

    Hazim Adnan Hashim

    2016-09-01

    Full Text Available The present study addresses the effects of traumatic events such as the September 11 attacks on victims' fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth, and thus ground people's sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were deeply traumatic for Americans because they fundamentally changed their understanding of many aspects of life. The attacks led many individuals to build new kinds of beliefs and assumptions about themselves and the world. Many writers have written about the human ordeals that followed this incident. Don DeLillo's Falling Man reflects the traumatic repercussions of this disaster on Americans' fundamental assumptions. The objective of this study is to examine the novel from the perspective of the trauma that has afflicted the victims' fundamental understandings of the world and the self. Individuals' fundamental understandings can be changed or modified by exposure to certain types of events such as war, terrorism, political violence, or even a sense of alienation. The Assumptive World theory of Ronnie Janoff-Bulman is used as a framework to study the traumatic experience of the characters in Falling Man. The significance of the study lies in providing a new perspective in the field of trauma studies that can help trauma victims adopt alternative assumptions or reshape their previous ones in order to heal from traumatic effects.

  3. Seemingly Unrelated Regression Approach for GSTARIMA Model to Forecast Rain Fall Data in Malang Southern Region Districts

    Directory of Open Access Journals (Sweden)

    Siti Choirun Nisak

    2016-06-01

    Full Text Available Time series forecasting models can be used to predict phenomena that occur in nature. Generalized Space Time Autoregressive (GSTAR) is a time series model used to forecast data containing both time and space elements, but it is limited to stationary, non-seasonal data. Generalized Space Time Autoregressive Integrated Moving Average (GSTARIMA) is a development of GSTAR that accommodates non-stationary and seasonal data. Ordinary Least Squares (OLS) is the standard method for estimating the parameters of a GSTARIMA model, but OLS does not produce efficient estimators when the errors are correlated across locations. OLS assumes a constant error variance-covariance matrix, ε ~ N(0, σ²I); in fact, the observation sites are correlated, so the error variance-covariance matrix does not have this form. Therefore, the Seemingly Unrelated Regression (SUR) approach, which assumes ε ~ N(0, Σ) with a general covariance matrix Σ, is used to overcome this weakness of OLS when estimating the GSTARIMA parameters. The SUR parameters are estimated by Generalized Least Squares (GLS). Applying the GSTARIMA-SUR model to rainfall data in the Malang region yielded a GSTARIMA ((1)(1,12,36),(0),(1))-SUR model with an average coefficient of determination of 57.726%.
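
    As a rough illustration of why the error-covariance assumption matters, the sketch below contrasts the OLS estimator with the GLS estimator underlying SUR when errors are correlated across locations; the data, covariance matrix and coefficients are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stacked regression y = X b + e with correlated errors: Var(e) = Sigma, not sigma^2 * I.
n, k = 60, 3
X = rng.normal(size=(n, k))
beta_true = np.array([2.0, -1.0, 0.5])
A = rng.normal(size=(n, n))
Sigma = 0.2 * np.eye(n) + 0.1 * (A @ A.T) / n        # an arbitrary positive-definite error covariance
e = rng.multivariate_normal(np.zeros(n), Sigma)
y = X @ beta_true + e

# OLS ignores the correlation structure: b_ols = (X'X)^-1 X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# GLS, the estimator used for SUR, weights by the inverse covariance:
# b_gls = (X' Sigma^-1 X)^-1 X' Sigma^-1 y
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

print(beta_ols, beta_gls)   # both unbiased, but GLS is efficient when errors are correlated
```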

  4. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  5. The Role of Policy Assumptions in Validating High-stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    L. Cronbach has made the point that for validity arguments to be convincing to diverse audiences, they need to be based on assumptions that are credible to these audiences. The interpretations and uses of high stakes test scores rely on a number of policy assumptions about what should be taught in schools, and more specifically, about the content…

  6. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  7. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to checking the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums...

  8. The Effect of Multicollinearity and the Violation of the Assumption of Normality on the Testing of Hypotheses in Regression Analysis.

    Science.gov (United States)

    Vasu, Ellen S.; Elmore, Patricia B.

    The effects of violating the assumption of normality, coupled with the condition of multicollinearity, upon the outcome of testing the hypothesis Beta equals zero in the two-predictor regression equation are investigated. A Monte Carlo approach was utilized in which three different distributions were sampled for two sample sizes over…
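
    The kind of Monte Carlo experiment described above can be sketched as follows; the sample size, degree of collinearity and skewed error distribution are illustrative choices rather than the settings of the original report. The simulation records how often H0: beta2 = 0 is rejected when it is in fact true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rate(n=30, rho=0.95, reps=2000, alpha=0.05):
    """Empirical Type I error rate for H0: beta2 = 0 in a two-predictor regression
    with highly collinear predictors and skewed (centred exponential) errors."""
    rejections = 0
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)   # multicollinearity
        e = rng.exponential(1.0, size=n) - 1.0                       # non-normal errors
        y = 1.0 + 0.5 * x1 + 0.0 * x2 + e                            # beta2 = 0, so H0 is true
        X = np.column_stack([np.ones(n), x1, x2])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (n - 3)
        t = beta[2] / np.sqrt(s2 * XtX_inv[2, 2])
        rejections += abs(t) > stats.t.ppf(1 - alpha / 2, df=n - 3)
    return rejections / reps

print(rejection_rate())   # compare with the nominal 0.05 level
```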

  9. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.
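
    The composed-error structure behind such a Cobb-Douglas stochastic frontier can be sketched as below; the coefficients and variances are invented, and a real analysis would estimate the inefficiency term by maximum likelihood under the half-normal or truncated-normal assumption rather than simulate it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical firm-level data for a Cobb-Douglas stochastic frontier:
#   ln y = b0 + b1*ln K + b2*ln L + v - u,   v ~ N(0, s_v^2),   u >= 0 (inefficiency)
n = 200
lnK = rng.normal(3.0, 0.5, n)
lnL = rng.normal(2.0, 0.4, n)
v = rng.normal(0.0, 0.1, n)              # symmetric statistical noise
u = np.abs(rng.normal(0.0, 0.3, n))      # half-normal inefficiency (truncated normal is the alternative)
lny = 0.5 + 0.4 * lnK + 0.6 * lnL + v - u

# Technical efficiency of each firm is TE = exp(-u); an estimation exercise would
# recover u from the composed error by maximum likelihood instead of observing it.
TE = np.exp(-u)
print(TE.mean())                          # group-level scores are averages of this quantity
```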

  10. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Directory of Open Access Journals (Sweden)

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.

  11. Modeling approaches of competitive sorption and transport of trace metals and metalloids in soils: a review.

    Science.gov (United States)

    Selim, H M; Zhang, Hua

    2013-01-01

    Competition among various heavy metal species for available adsorption sites on soil matrix surfaces can enhance the mobility of contaminants in the soil environment. Accurate prediction of the fate and behavior of heavy metals in soils and geologic media requires an understanding of the underlying competitive-sorption and transport processes. In this review, we present equilibrium and kinetic models for competitive heavy metal sorption and transport in soils. Several examples are summarized to illustrate the impact of competing ions on the reactivities and mobility of heavy metals in the soil-water environment. We demonstrate that equilibrium Freundlich approaches can be extended to account for competitive sorption of cations and anions with the incorporation of competition coefficients associated with each reaction. Furthermore, retention models of the multiple-reaction type, including the two-site nonlinear equilibrium-kinetic models and the concurrent- and consecutive-multireaction models, were modified to describe commonly observed time-dependent behaviors of heavy metals in soils. We also show that equilibrium Langmuir and kinetic second-order models can be extended to simulate competitive sorption and transport in soils, although the use of such models is limited due to their simplifying assumptions. A major drawback of the empirically based Freundlich and Langmuir approaches is that their associated parameters are specific to each soil. Alternatively, geochemical models that are based on ion-exchange and surface-complexation concepts are capable of quantifying the competitive behavior of several chemical species under a wide range of environmental conditions. Such geochemical models, however, are incapable of describing the time-dependent sorption behavior of heavy metal ions in competitive systems. Further research is needed to develop a general-purpose model based on physical and chemical mechanisms governing competitive sorption in soils.
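
    As one concrete example of a Freundlich approach extended with competition coefficients, the sketch below implements the Sheindorf-Rebhun-Sheintuch (SRS) multicomponent isotherm; the parameter values are purely illustrative and are not taken from the review.

```python
import numpy as np

def srs_sorbed(C, K, n, a):
    """Sheindorf-Rebhun-Sheintuch (SRS) multicomponent Freundlich isotherm:
    q_i = K_i * C_i * (C_i + sum_j a_ij * C_j)^(n_i - 1), with a_ij the
    competition coefficient of species j against species i."""
    C = np.asarray(C, dtype=float)
    q = np.empty_like(C)
    for i in range(len(C)):
        effective = C[i] + sum(a[i, j] * C[j] for j in range(len(C)) if j != i)
        q[i] = K[i] * C[i] * effective ** (n[i] - 1.0)
    return q

# Two competing metals (e.g. Cd and Zn) with hypothetical parameters
K = np.array([12.0, 8.0])
n = np.array([0.6, 0.7])
a = np.array([[1.0, 0.4],
              [0.8, 1.0]])
print(srs_sorbed([0.5, 0.1], K, n, a))
print(srs_sorbed([0.5, 2.0], K, n, a))   # sorbed Cd drops as the competitor's concentration rises
```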

  12. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio

    In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be explored and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding model residuals. Statistical transformations, such as the use of the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to a wrong uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which used the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by residual transformation and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the
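
    For reference, the Box-Cox transformation mentioned above takes the standard form shown in the sketch below; the data and the value of the transformation parameter are made up for illustration.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transformation: z = (y**lam - 1) / lam for lam != 0, z = ln(y) for lam = 0.
    It is commonly applied to model output and observations so that the residuals
    are closer to homoscedastic before a Gaussian likelihood is built."""
    y = np.asarray(y, dtype=float)
    if np.isclose(lam, 0.0):
        return np.log(y)
    return (y ** lam - 1.0) / lam

obs = np.array([1.2, 3.4, 0.8, 5.1])     # e.g. observed pollutant concentrations
sim = np.array([1.0, 3.9, 1.1, 4.4])     # corresponding model output
lam = 0.3                                # illustrative value of the transformation parameter
residuals = box_cox(obs, lam) - box_cox(sim, lam)
print(residuals)
```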

  13. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  14. A Fuzzy Modeling Approach for Replicated Response Measures Based on Fuzzification of Replications with Descriptive Statistics and Golden Ratio

    Directory of Open Access Journals (Sweden)

    Özlem TÜRKŞEN

    2018-03-01

    Full Text Available Some of the experimental designs can be composed of replicated response measures in which the replications cannot be identified exactly and may have uncertainty different than randomness. Then, the classical regression analysis may not be proper to model the designed data because of the violation of probabilistic modeling assumptions. In this case, fuzzy regression analysis can be used as a modeling tool. In this study, the replicated response values are newly formed to fuzzy numbers by using descriptive statistics of replications and golden ratio. The main aim of the study is obtaining the most suitable fuzzy model for replicated response measures through fuzzification of the replicated values by taking into account the data structure of the replications in statistical framework. Here, the response and unknown model coefficients are considered as triangular type-1 fuzzy numbers (TT1FNs), whereas the inputs are crisp. Predicted fuzzy models are obtained according to the proposed fuzzification rules by using the Fuzzy Least Squares (FLS) approach. The performances of the predicted fuzzy models are compared by using the Root Mean Squared Error (RMSE) criteria. A data set from the literature, called wheel cover component data set, is used to illustrate the performance of the proposed approach and the obtained results are discussed. The calculation results show that the combined formulation of the descriptive statistics and the golden ratio is the most preferable fuzzification rule according to the well-known decision making method, called TOPSIS, for the data set.

  15. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted by the computer automatically. A new approach is thereby proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it can realize the combination of statistics and dynamics to a certain extent.
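
    A minimal sketch of the twin-experiment setup is given below: the classic Lorenz-63 system serves as the prediction model while "reality" adds a periodic term. The particular forcing, parameters and Euler stepping are illustrative simplifications, not the exact evolutionary function or scheme used in the paper.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Tendency of the classic Lorenz (1963) system, used as the prediction model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def truth_tendency(state, t, eps=2.0, period=5.0):
    """'Reality': Lorenz-63 plus a periodic forcing term (an illustrative choice)."""
    return lorenz63(state) + eps * np.sin(2.0 * np.pi * t / period)

# Generate "observational data" from the forced system with a simple Euler scheme;
# the model error at each step is the tendency difference between truth and model.
dt, steps = 0.005, 2000
state = np.array([1.0, 1.0, 1.0])
model_error = []
for i in range(steps):
    t = i * dt
    model_error.append(truth_tendency(state, t) - lorenz63(state))
    state = state + dt * truth_tendency(state, t)      # advance the "true" trajectory

print(np.mean(np.abs(model_error), axis=0))            # mean absolute model error per component
```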

  16. An approach to thermochemical modeling of nuclear waste glass

    International Nuclear Information System (INIS)

    Besmann, T.M.; Beahm, E.C.; Spear, K.E.

    1998-01-01

    This initial work is aimed at developing a basic understanding of the phase equilibria and solid solution behavior of the constituents of waste glass. Current, experimentally determined values are less than desirable since they depend on measurement of the leach rate under non-realistic conditions designed to accelerate processes that occur on a geologic time scale. The often-used assumption that the activity of a species is either unity or equal to the overall concentration of the metal can also yield misleading results. The associate species model, a recent development in thermochemical modeling, will be applied to these systems to more accurately predict chemical activities in such complex systems as waste glasses

  17. Reflecting on the challenges of choosing and using a grounded theory approach.

    Science.gov (United States)

    Markey, Kathleen; Tilki, Mary; Taylor, Georgina

    2014-11-01

    To explore three different approaches to grounded theory and consider some of the possible philosophical assumptions underpinning them. Grounded theory is a comprehensive yet complex methodology that offers a procedural structure that guides the researcher. However, divergent approaches to grounded theory present dilemmas for novice researchers seeking to choose a suitable research method. This is a methodology paper. This is a reflexive paper that explores some of the challenges experienced by a PhD student when choosing and operationalising a grounded theory approach. Before embarking on a study, novice grounded theory researchers should examine their research beliefs to assist them in selecting the most suitable approach. This requires an insight into the approaches' philosophical assumptions, such as those pertaining to ontology and epistemology. Researchers need to be clear about the philosophical assumptions underpinning their studies and the effects that different approaches will have on the research results. This paper presents a personal account of the journey of a novice grounded theory researcher who chose a grounded theory approach and worked within its theoretical parameters. Novice grounded theory researchers need to understand the different philosophical assumptions that influence the various grounded theory approaches, before choosing one particular approach.

  18. HEDR modeling approach

    International Nuclear Information System (INIS)

    Shipler, D.B.; Napier, B.A.

    1992-07-01

    This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.

  19. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  20. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  1. The effects of internal refractive index variation in near-infrared optical tomography: a finite element modelling approach

    International Nuclear Information System (INIS)

    Dehghani, Hamid; Brooksby, Ben; Vishwanath, Karthik; Pogue, Brian W; Paulsen, Keith D

    2003-01-01

    Near-infrared (NIR) tomography is a technique used to measure light propagation through tissue and generate images of internal optical property distributions from boundary measurements. Most popular applications have concentrated on female breast imaging, neonatal and adult head imaging, as well as muscle and small animal studies. In most instances a highly scattering medium with a homogeneous refractive index is assumed throughout the imaging domain. Using these assumptions, it is possible to simplify the model to the diffusion approximation. However, biological tissue contains regions of varying optical absorption and scatter, as well as varying refractive index. In this work, we introduce an internal boundary constraint in the finite element method approach to modelling light propagation through tissue that accounts for regions of different refractive indices. We have compared the results to data from a Monte Carlo simulation and show that for a simple two-layered slab model of varying refractive index, the phase of the measured reflectance data is significantly altered by the variation in internal refractive index, whereas the amplitude data are affected only slightly

  2. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using physical complex climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category adopting a pure stochastic approach.

  3. Robust Estimation of Value-at-Risk through Distribution-Free and Parametric Approaches Using the Joint Severity and Frequency Model: Applications in Financial, Actuarial, and Natural Calamities Domains

    Directory of Open Access Journals (Sweden)

    Sabyasachi Guharay

    2017-07-01

    Full Text Available Value-at-Risk (VaR) is a well-accepted risk metric in modern quantitative risk management (QRM). The classical Monte Carlo simulation (MCS) approach, denoted henceforth as the classical approach, assumes the independence of loss severity and loss frequency. In practice, this assumption does not always hold true. Through mathematical analyses, we show that the classical approach is prone to significant biases when the independence assumption is violated. This is also corroborated by studying both simulated and real-world datasets. To overcome the limitations and to more accurately estimate VaR, we develop and implement the following two approaches for VaR estimation: the data-driven partitioning of frequency and severity (DPFS) using clustering analysis, and copula-based parametric modeling of frequency and severity (CPFS). These two approaches are verified using simulation experiments on synthetic data and validated on five publicly available datasets from diverse domains; namely, the financial indices data of Standard & Poor’s 500 and the Dow Jones industrial average, chemical loss spills as tracked by the US Coast Guard, Australian automobile accidents, and US hurricane losses. The classical approach estimates VaR inaccurately for 80% of the simulated data sets and for 60% of the real-world data sets studied in this work. Both the DPFS and the CPFS methodologies attain VaR estimates within 99% bootstrap confidence interval bounds for both simulated and real-world data. We provide a process flowchart for risk practitioners describing the steps for using the DPFS versus the CPFS methodology for VaR estimation in real-world loss datasets.
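
    The classical approach criticised above can be written in a few lines: simulate a compound loss with an independent Poisson frequency and lognormal severity, then read off a quantile. The distributions and parameter values below are placeholders; the DPFS and CPFS methods replace exactly this independence assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def classical_var(lam=5.0, mu=0.0, sigma=1.0, alpha=0.99, n_sims=20_000):
    """Classical Monte Carlo VaR under the independence assumption:
    loss count per period ~ Poisson(lam), each severity ~ Lognormal(mu, sigma),
    and the aggregate loss is their compound sum."""
    counts = rng.poisson(lam, size=n_sims)
    losses = np.array([rng.lognormal(mu, sigma, size=c).sum() for c in counts])
    return np.quantile(losses, alpha)

print(classical_var())   # the DPFS/CPFS methods replace the independence assumption above
```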

  4. False assumptions.

    Science.gov (United States)

    Swaminathan, M

    1997-01-01

    Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media.

  5. Are implicit policy assumptions about climate adaptation trying to push drinking water utilities down an impossible path?

    Science.gov (United States)

    Klasic, M. R.; Ekstrom, J.; Bedsworth, L. W.; Baker, Z.

    2017-12-01

    Extreme events such as wildfires, droughts, and flooding are projected to be more frequent and intense under a changing climate, increasing challenges to water quality management. To protect and improve public health, drinking water utility managers need to understand and plan for climate change and extreme events. This three year study began with the assumption that improved climate projections were key to advancing climate adaptation at the local level. Through a survey (N = 259) and interviews (N = 61) with California drinking water utility managers during the peak of the state's recent drought, we found that scientific information was not a key barrier hindering adaptation. Instead, we found that managers fell into three distinct mental models based on their interaction with, perceptions, and attitudes, towards scientific information and the future of water in their system. One of the mental models, "modeled futures", is a concept most in line with how climate change scientists talk about the use of information. Drinking water utilities falling into the "modeled future" category tend to be larger systems that have adequate capacity to both receive and use scientific information. Medium and smaller utilities in California, that more often serve rural low income communities, tend to fall into the other two mental models, "whose future" and "no future". We show evidence that there is an implicit presumption that all drinking water utility managers should strive to align with "modeled future" mental models. This presentation questions this assumption as it leaves behind many utilities that need to adapt to climate change (several thousand in California alone), but may not have the technical, financial, managerial, or other capacity to do so. It is clear that no single solution or pathway to drought resilience exists for water utilities, but we argue that a more explicit understanding and definition of what it means to be a resilient drinking water utility is

  6. 7 CFR 1980.476 - Transfer and assumptions.

    Science.gov (United States)

    2010-01-01

    ...-354 449-30 to recover its pro rata share of the actual loss at that time. In completing Form FmHA or... the lender on liquidations and property management. A. The State Director may approve all transfer and... Director will notify the Finance Office of all approved transfer and assumption cases on Form FmHA or its...

  7. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  8. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  9. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  10. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Revisiting the time until fixation of a neutral mutant in a finite population - A coalescent theory approach.

    Science.gov (United States)

    Greenbaum, Gili

    2015-09-07

    Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
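
    The comparison can be reproduced in miniature with a direct Wright-Fisher simulation, averaging the fixation time of a single new neutral mutant over the replicates that fix and comparing it with the diffusion result of roughly 4N generations; the population size and replicate count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_fixation_time(N=50, reps=20_000):
    """Neutral Wright-Fisher Markov chain: a new mutant starts as 1 of 2N gene
    copies; each generation the count is binomially resampled. The average is
    taken over the replicates in which the mutant fixes."""
    times = []
    for _ in range(reps):
        count, t = 1, 0
        while 0 < count < 2 * N:
            count = rng.binomial(2 * N, count / (2 * N))
            t += 1
        if count == 2 * N:               # keep only replicates that reached fixation
            times.append(t)
    return np.mean(times)

print(mean_fixation_time(), "vs the diffusion approximation of roughly", 4 * 50, "generations")
```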

  12. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    Science.gov (United States)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either on CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed the best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches and researchers are encouraged to implement it into their CH4 emission models.
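
    As an illustration of the threshold logic, a concentration-threshold (ECT-style) rule can be written in a few lines; the units, threshold value and time stepping below are placeholders rather than the parameterisation used in the study.

```python
def ebullition_ect(ch4_conc, threshold):
    """Concentration-threshold (ECT-style) rule: pore-water CH4 above the threshold
    is released as bubbles in the current step; EPT and EBG variants trigger on
    pressure and on free-phase gas volume instead."""
    flux = max(ch4_conc - threshold, 0.0)
    return ch4_conc - flux, flux          # (remaining concentration, ebullition flux)

conc = 0.0
for production in [0.2, 0.3, 0.4, 0.1]:   # CH4 produced per step (arbitrary units)
    conc, flux = ebullition_ect(conc + production, threshold=0.5)
    print(round(conc, 2), round(flux, 2))
```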

  13. Operational gaming an international approach

    CERN Document Server

    Ståhl, Ingolf

    1983-01-01

    Operational Gaming: An International Approach focuses on various research on this method of systems analysis. The text points out the value of this method in decision making, planning, and in the implementation of policies. The book presents a survey that highlights the connection of experimental gaming, game theory, and operational gaming. The value of gaming as a balancing method in assessing multifaceted computer models, most notably about their assumptions on human behavior, is noted. The book also offers an overview of gaming in other countries, such as Bulgaria, Soviet Union, and Japan,

  14. Supplier's optimal bidding strategy in electricity pay-as-bid auction: Comparison of the Q-learning and a model-based approach

    International Nuclear Information System (INIS)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib

    2008-01-01

    In this paper, the bidding decision making problem in electricity pay-as-bid auction is studied from a supplier's point of view. The bidding problem is a complicated task, because of suppliers' uncertain behaviors and demand fluctuation. In a specific case, in which, the market clearing price (MCP) is considered as a continuous random variable with a known probability distribution function (PDF), an analytic solution is proposed. The suggested solution is generalized to consider the effect of supplier market power due to transmission congestion. As a result, an algebraic equation is developed to compute optimal offering price. The basic assumption in this approach is to take the known probabilistic model for the MCP. The above-mentioned method, called model-based approach, is not more applicable in a realistic situation. In order to overcome the drawback of this method, which needs information about the MCP and its PDF, the supplier learns from past experiences using the Q-learning algorithm to find out the optimal bid price. The simulation results of the model-based and Q-learning methods are compared on a studied system. It is shown that a supplier using the Q-learning algorithm is able to find the optimal bidding strategy similar to one obtained by the model-based approach. Furthermore, to analyze a more realistic situation, the suppliers' behaviors are modeled using a multi-agent system. Simulation results illustrate that the studied supplier finds the optimal bidding strategy in power market using the Q-learning algorithm. (author)
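
    A stripped-down, stateless version of the Q-learning idea is sketched below. The toy pay-as-bid market, with a normally fluctuating clearing price and a fixed marginal cost, is an invented stand-in for the paper's market simulation, and a real application would also track state variables such as demand level.

```python
import numpy as np

rng = np.random.default_rng(5)

prices = np.linspace(20.0, 60.0, 9)        # candidate offer prices ($/MWh, illustrative)
Q = np.zeros(len(prices))                  # one Q-value per offer price (stateless case)
alpha, eps = 0.1, 0.1                      # learning rate and exploration rate
marginal_cost = 25.0

def profit(offer):
    """Toy pay-as-bid market: the offer is accepted if it does not exceed a randomly
    fluctuating clearing price, and an accepted offer is paid its own price."""
    mcp = rng.normal(45.0, 5.0)
    return offer - marginal_cost if offer <= mcp else 0.0

for _ in range(5000):
    a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q))
    r = profit(prices[a])
    Q[a] += alpha * (r - Q[a])             # Q-learning update (no next state in this toy setting)

print(prices[int(np.argmax(Q))])           # should settle near the most profitable offer
```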

  15. Factor structure and concurrent validity of the world assumptions scale.

    Science.gov (United States)

    Elklit, Ask; Shevlin, Mark; Solomon, Zahava; Dekel, Rachel

    2007-06-01

    The factor structure of the World Assumptions Scale (WAS) was assessed by means of confirmatory factor analysis. The sample was comprised of 1,710 participants who had been exposed to trauma that resulted in whiplash. Four alternative models were specified and estimated using LISREL 8.72. A correlated 8-factor solution was the best explanation of the sample data. The estimates of reliability of eight subscales of the WAS ranged from .48 to .82. Scores from five subscales correlated significantly with trauma severity as measured by the Harvard Trauma Questionnaire, although the magnitude of the correlations was low to modest, ranging from .08 to -.43. It is suggested that the WAS has adequate psychometric properties for use in both clinical and research settings.

  16. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01

    A spiral model was chosen for researching and structuring this thesis, shown in Figure 1 (Spiral Model). This approach allowed multiple iterations of source material... applications and refining through iteration. Scope: the research is limited to a literature review, limited

  17. An Extension to Deng's Entropy in the Open World Assumption with an Application in Sensor Data Fusion.

    Science.gov (United States)

    Tang, Yongchuan; Zhou, Deyun; Chan, Felix T S

    2018-06-11

    Quantification of the degree of uncertainty in the Dempster-Shafer evidence theory (DST) framework with belief entropy is still an open issue, and remains largely unexplored under the open world assumption. Currently, the existing uncertainty measures in the DST framework are limited to the closed world, where the frame of discernment (FOD) is assumed to be complete. To address this issue, this paper focuses on extending a belief entropy to the open world by considering simultaneously the uncertain information represented by the FOD and the nonzero mass function of the empty set. An extension to Deng’s entropy in the open world assumption (EDEOW) is proposed as a generalization of Deng’s entropy, and it degenerates to the Deng entropy in the closed world wherever necessary. In order to test the reasonability and effectiveness of the extended belief entropy, an EDEOW-based information fusion approach is proposed and applied to sensor data fusion under uncertainty. The experimental results verify the usefulness and applicability of the extended measure as well as the modified sensor data fusion method. A few open issues remain in the current work: the necessary properties for a belief entropy in the open world assumption, whether there exists a belief entropy that satisfies all the existing properties, and what is the most appropriate fusion frame for sensor data fusion under uncertainty.
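
    For context, the closed-world Deng entropy that the paper generalises is sketched below; the exact EDEOW formula, which additionally involves the mass of the empty set and the size of the FOD, is not reproduced here.

```python
from math import log2

def deng_entropy(m):
    """Closed-world Deng entropy of a basic probability assignment:
    Ed(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ), over non-empty focal elements A."""
    total = 0.0
    for focal, mass in m.items():
        if mass > 0 and len(focal) > 0:
            total -= mass * log2(mass / (2 ** len(focal) - 1))
    return total

# Example BPA over the frame of discernment {a, b, c}
m = {frozenset({"a"}): 0.4, frozenset({"a", "b"}): 0.3, frozenset({"a", "b", "c"}): 0.3}
print(deng_entropy(m))
```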

  18. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno

    2013-02-26

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model.Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates.Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. 2013 Seplveda et al.; licensee BioMed Central Ltd.

  19. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    Science.gov (United States)

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
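
    A much-simplified version of the idea can be sketched with the Poisson-Gamma (negative binomial) case: fit an overdispersed baseline to per-window coverage and flag windows in the far tails. The moment fit and quantile thresholds below are illustrative stand-ins for the paper's hierarchical Bayesian treatment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Illustrative per-window coverage: an overdispersed baseline (mean ~50) with one
# amplified and one deleted region spiked in.
coverage = rng.negative_binomial(n=10, p=10 / 60, size=300)
coverage[100:110] = rng.poisson(150, 10)      # amplification
coverage[200:210] = rng.poisson(5, 10)        # deletion

# Fit a Poisson-Gamma (negative binomial) baseline by moments; in practice a robust
# or hierarchical Bayesian fit would be used so the CNV windows do not bias it.
mean, var = coverage.mean(), coverage.var()
p_hat = mean / var                            # valid because the counts are overdispersed (var > mean)
n_hat = mean * p_hat / (1 - p_hat)
lo, hi = stats.nbinom.ppf([0.005, 0.995], n_hat, p_hat)
flagged = np.where((coverage < lo) | (coverage > hi))[0]
print(flagged)                                # candidate deletion/amplification windows
```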

  20. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-01-01

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model.Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates.Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. 2013 Seplveda et al.; licensee BioMed Central Ltd.

  1. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically ... methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  2. Model documentation Coal Market Module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-04-30

    This report documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 1996 (AEO96). The report catalogues and describes the assumptions, methodology, estimation techniques, and source code of the CMM's three submodules: the Coal Production Submodule, the Coal Export Submodule, and the Coal Distribution Submodule.

  3. Expressing Environment Assumptions and Real-time Requirements for a Distributed Embedded System with Shared Variables

    DEFF Research Database (Denmark)

    Tjell, Simon; Fernandes, João Miguel

    2008-01-01

    In a distributed embedded system, it is often necessary to share variables among its computing nodes to allow the distribution of control algorithms. It is therefore necessary to include a component in each node that provides the service of variable sharing. For that type of component, this paper...... for the component. The CPN model can be used to validate the environment assumptions and the requirements. The validation is performed by execution of the model during which traces of events and states are automatically generated and evaluated against the requirements....

  4. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    Science.gov (United States)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contribution from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using analysis of variance (ANOVA) approach. Generally, most of the impact assessment studies are carried out with unchanging hydrologic model parameters in future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression based methodology is presented to obtain the hydrologic model parameters with changing land use and climate scenarios in future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set-up over the basin, under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to reduce in future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that model stationarity assumption and GCMs along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine stationarity assumption of models before considering them
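
    The ANOVA-style segregation of variance can be illustrated with a toy factorial ensemble; the numbers and effect sizes below are invented, and only three of the study's five sources of uncertainty are shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy factorial ensemble of mean-flow projections: 4 GCMs x 3 emission scenarios x 2 land-use scenarios.
flows = rng.normal(100.0, 5.0, size=(4, 3, 2))
flows += np.arange(4).reshape(4, 1, 1) * 8.0        # built-in strong GCM effect
flows += np.arange(3).reshape(1, 3, 1) * 2.0        # weaker emission-scenario effect

grand = flows.mean()
ss_total = ((flows - grand) ** 2).sum()

def main_effect_ss(axis_keep):
    """ANOVA-style sum of squares of one factor's main effect; interaction terms
    and internal variability are lumped into the remainder in this simple split."""
    other_axes = tuple(ax for ax in range(flows.ndim) if ax != axis_keep)
    level_means = flows.mean(axis=other_axes)
    cells_per_level = flows.size / flows.shape[axis_keep]
    return cells_per_level * ((level_means - grand) ** 2).sum()

for name, axis in [("GCM", 0), ("emission scenario", 1), ("land use", 2)]:
    share = 100.0 * main_effect_ss(axis) / ss_total
    print(f"{name}: {share:.1f}% of total ensemble variance")
```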

  5. Symbolic regression via genetic programming for data driven derivation of confinement scaling laws without any assumption on their mathematical form

    International Nuclear Information System (INIS)

    Murari, A; Peluso, E; Gelfusa, M; Lupelli, I; Lungaroni, M; Gaudio, P

    2015-01-01

    Many measurements are required to control thermonuclear plasmas and to fully exploit them scientifically. In recent years JET has shown the potential to generate about 50 GB of data per shot. These amounts of data require more sophisticated data analysis methodologies to perform correct inference, and various techniques have recently been developed in this respect. The present paper covers a new methodology to extract mathematical models directly from the data without any a priori assumption about their expression. The approach, based on symbolic regression via genetic programming, is exemplified using the data of the International Tokamak Physics Activity database for the energy confinement time. The best obtained scaling laws are not in power-law form and suggest revisiting the extrapolation to ITER. Indeed, the best non-power-law scalings predict confinement times in ITER of approximately 2 to 3 s. On the other hand, more comprehensive and better databases are required to fully profit from the power of these new methods and to discriminate between the hundreds of thousands of models that they can generate.

  6. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    Energy Technology Data Exchange (ETDEWEB)

    Baisden, W.T., E-mail: t.baisden@gns.cri.nz [National Isotope Centre, GNS Science, P.O. Box 31312, Lower Hutt (New Zealand); Canessa, S. [National Isotope Centre, GNS Science, P.O. Box 31312, Lower Hutt (New Zealand)

    2013-01-15

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of ¹⁴C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ≈500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of ¹⁴C to determine residence times, by estimating the amount of 'bomb ¹⁴C' incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point ¹⁴C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C ('passive fraction'), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
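
    The time-series logic described above, with two or more ¹⁴C measurements constraining a residence time, can be sketched with a one-pool model; the stylised atmospheric curve, the hypothetical soil measurements and the grid search below are illustrative only and do not use the real atmospheric record.

```python
import numpy as np

# One-pool turnover model driven by an atmospheric bomb-14C curve:
#   dF_soil/dt = k * (F_atm(t) - F_soil(t)),  radioactive decay being negligible here.
# The atmospheric curve below is a crude stylised stand-in, not the real record.
years = np.arange(1950, 2011)
f_atm = 1.0 + 0.9 * np.exp(-((years - 1964) / 12.0) ** 2) * (years >= 1955)

def soil_f(k, f0=1.0):
    """Fraction-modern 14C of the soil pool, stepped forward one year at a time."""
    f = np.empty_like(f_atm)
    f[0] = f0
    for i in range(1, len(years)):
        f[i] = f[i - 1] + k * (f_atm[i - 1] - f[i - 1])
    return f

# Two hypothetical soil measurements a decade apart constrain k (and hence the
# residence time 1/k) through a simple grid search.
obs = {1975: 1.25, 1985: 1.18}
ks = np.linspace(0.005, 0.5, 200)
errors = [sum((soil_f(k)[years == yr][0] - v) ** 2 for yr, v in obs.items()) for k in ks]
k_best = ks[int(np.argmin(errors))]
print("estimated residence time ~", round(1.0 / k_best, 1), "years")
```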

  7. The stochastic system approach for estimating dynamic treatments effect.

    Science.gov (United States)

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

    The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper; we call this approach to causality the "stochastic system" approach. We first treat this problem in discrete time, then in continuous time. This approach allows biological knowledge to be incorporated naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  8. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  9. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    Science.gov (United States)

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra, a technique from computer science, as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing-scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
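
    The sketch below illustrates the individual-to-population idea in the simplest possible setting: an exact stochastic simulation of individual SIR events is compared with the mean-field differential equations that scale-changing derivations of this kind recover. It is a generic SIR example with assumed rate constants, not the process-algebra formulation used in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma, N = 0.3, 0.1, 1000          # contact and recovery rates (per day)

      def gillespie_sir(s0, i0, t_end, rng):
          """Exact stochastic simulation of individual SIR events (one realisation)."""
          s, i, t = s0, i0, 0.0
          times, infected = [t], [i]
          while t < t_end and i > 0:
              rate_inf = beta * s * i / N
              rate_rec = gamma * i
              total = rate_inf + rate_rec
              t += rng.exponential(1.0 / total)
              if rng.random() < rate_inf / total:
                  s, i = s - 1, i + 1            # infection event
              else:
                  i -= 1                         # recovery event
              times.append(t); infected.append(i)
          return np.array(times), np.array(infected)

      def mean_field(t, y):
          """Population-level ODEs recovered from the individual-level rates."""
          s, i, r = y
          return [-beta * s * i / N, beta * s * i / N - gamma * i, gamma * i]

      rng = np.random.default_rng(1)
      t_sim, i_sim = gillespie_sir(N - 5, 5, 120, rng)
      ode = solve_ivp(mean_field, (0, 120), [N - 5, 5, 0], max_step=0.5)

      print("stochastic peak infected:", i_sim.max())
      print("mean-field peak infected:", int(ode.y[1].max()))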

  10. A multi-scalar PDF approach for LES of turbulent spray combustion

    Science.gov (United States)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  11. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  12. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
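
    A small simulated example of the contrast discussed above is sketched below, assuming the statsmodels package: fitting OLS to a binary outcome (a linear probability model) yields predicted "probabilities" outside [0, 1], whereas the logistic fit does not. The data-generating parameters are arbitrary.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      x = rng.normal(size=500)
      p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))    # true logistic relationship
      y = rng.binomial(1, p)                        # binary dependent variable

      X = sm.add_constant(x)
      ols = sm.OLS(y, X).fit()                      # linear probability model
      logit = sm.Logit(y, X).fit(disp=0)            # logistic regression

      print("OLS fitted values outside [0, 1]:  ",
            int(np.sum((ols.fittedvalues < 0) | (ols.fittedvalues > 1))))
      print("Logit fitted values outside [0, 1]:",
            int(np.sum((logit.predict(X) < 0) | (logit.predict(X) > 1))))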

  13. Models of galaxies - The modal approach

    International Nuclear Information System (INIS)

    Lin, C.C.; Lowe, S.A.

    1990-01-01

    The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs

  14. Efficient pseudorandom generators based on the DDH assumption

    NARCIS (Netherlands)

    Rezaeian Farashahi, R.; Schoenmakers, B.; Sidorenko, A.; Okamoto, T.; Wang, X.

    2007-01-01

    A family of pseudorandom generators based on the decisional Diffie-Hellman assumption is proposed. The new construction is a modified and generalized version of the Dual Elliptic Curve generator proposed by Barker and Kelsey. Although the original Dual Elliptic Curve generator is shown to be

  15. Handbook of structural equation modeling

    CERN Document Server

    Hoyle, Rick H

    2012-01-01

    The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu

  16. Modelling of Tip Vortex Cavitation for Engineering Applications in OpenFOAM

    NARCIS (Netherlands)

    Schot, J.J.A.; Pennings, P.C.; Pourquie, M.J.B.M.; Van Terwisga, T.J.C.

    2014-01-01

    In this paper modelling assumptions for the prediction of tip vortex flow and vortex cavitation with the RANS equations and homogeneous fluid approach in Open-FOAM are presented. The effects of the changes in the turbulence model are investigated and the results are compared with PIV measurements.

  17. Extension and Application of Credibility Models in Predicting Claim Frequency

    Directory of Open Access Journals (Sweden)

    Yuan-tao Xie

    2018-01-01

    Full Text Available In nonlife actuarial science, credibility models are one of the main methods of experience ratemaking. The Bühlmann-Straub credibility model can be expressed as a special case of linear mixed models (LMMs) with the underlying assumption of normality. In this paper, we extend the assumption of the Bühlmann-Straub model to include Poisson and negative binomial distributions, as they are more appropriate for describing the distribution of the number of claims. By using the framework of generalized linear mixed models (GLMMs), we obtain generalized credibility premiums that contain, as particular cases, another credibility premium in the literature. Compared to generalized linear mixed models, our extended credibility models also have the advantage that the credibility factor falls into the range from 0 to 1. The performance of our models in comparison with an existing model in the literature is also evaluated through numerical studies, which show that our approach produces premium estimates close to the optima. In addition, our proposed model can also be applied to the most commonly used ratemaking approaches, namely, the net premium and the optimal Bonus-Malus system.
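
    For reference, the sketch below computes the classical Bühlmann-Straub credibility factors and premiums that form the normal-theory baseline generalized in the paper; the GLMM extension itself is not reproduced here. The claim frequencies and exposure weights are hypothetical.

      import numpy as np

      # rows = risk groups, columns = policy years (hypothetical data)
      claims_per_exposure = np.array([[0.10, 0.12, 0.09, 0.11],
                                      [0.25, 0.30, 0.28, 0.27],
                                      [0.05, 0.04, 0.06, 0.05]])
      exposure = np.array([[120, 130, 125, 140],
                           [ 80,  85,  90,  95],
                           [200, 210, 190, 220]], dtype=float)

      r, T = claims_per_exposure.shape
      w_i = exposure.sum(axis=1)
      xbar_i = (exposure * claims_per_exposure).sum(axis=1) / w_i
      xbar = (w_i * xbar_i).sum() / w_i.sum()

      # Process (within-risk) variance and variance of the hypothetical means.
      s2 = (exposure * (claims_per_exposure - xbar_i[:, None]) ** 2).sum() / (r * (T - 1))
      a = ((w_i * (xbar_i - xbar) ** 2).sum() - (r - 1) * s2) / \
          (w_i.sum() - (w_i ** 2).sum() / w_i.sum())

      z = w_i / (w_i + s2 / a)                      # credibility factors in (0, 1)
      premium = z * xbar_i + (1 - z) * xbar         # credibility-weighted premiums
      for i, (zi, pi) in enumerate(zip(z, premium)):
          print(f"risk {i}: Z = {zi:.3f}, credibility premium = {pi:.4f}")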

  18. Evaporator modeling - A hybrid approach

    International Nuclear Information System (INIS)

    Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun

    2009-01-01

    In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedures for hybrid modeling include: (1) formulating the fundamental governing equations of the process based on energy and material balances and thermodynamic principles; (2) selecting input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) representing those variables that exist in the original equations but are not measurable as simple functions of the selected I/Os or as constants; (4) obtaining a single equation which correlates system inputs and outputs; and (5) identifying the unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which significantly reduces the computational burden and increases the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
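
    Step (5) above can be illustrated with a short least-squares identification sketch, assuming SciPy is available. The correlation form, parameter names and simulated measurements below are hypothetical stand-ins, not the evaporator equations of the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def evaporator_capacity(X, a, b, c):
          """Hypothetical lumped correlation: capacity from mass flow and superheat."""
          m_dot, dT = X
          return a * m_dot ** b * dT + c

      rng = np.random.default_rng(3)
      m_dot = rng.uniform(0.02, 0.10, 60)       # refrigerant mass flow (kg/s)
      dT = rng.uniform(4.0, 12.0, 60)           # temperature difference (K)
      true = evaporator_capacity((m_dot, dT), 180.0, 0.8, 0.5)
      measured = true * (1 + rng.normal(0, 0.03, 60))     # +/-3 % measurement noise

      params, cov = curve_fit(evaporator_capacity, (m_dot, dT), measured, p0=[100, 1, 0])
      pred = evaporator_capacity((m_dot, dT), *params)
      print("identified parameters:", np.round(params, 3))
      print("max relative error: %.1f %%" % (100 * np.max(np.abs(pred - measured) / measured)))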

  19. Towards representing human behavior and decision making in Earth system models. An overview of techniques and approaches

    NARCIS (Netherlands)

    Müller-Hansen, Finn; Schlüter, Maja; Maes, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-01-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their

  20. Tank waste remediation system retrieval and disposal mission key enabling assumptions

    International Nuclear Information System (INIS)

    Baldwin, J.H.

    1998-01-01

    An overall systems approach has been applied to develop action plans to support the retrieval and immobilization waste disposal mission. The review concluded that the systems and infrastructure required to support the mission are known. Required systems are either in place or plans have been developed. An analysis of the programmatic, management and technical activities necessary to declare Readiness to Proceed with execution of the mission demonstrates that the system, people, and hardware will be on line and ready to support the private contractors. The systems approach included defining the retrieval and immobilized waste disposal mission requirements and evaluating the readiness of the TWRS contractor to supply waste feed to the private contractors in June 2002. The Phase 1 feed delivery requirements from the Private Contractor Request for Proposals were reviewed, transfer piping routes were mapped, existing systems were evaluated, and upgrade requirements were defined. Technical Basis Reviews were completed to define work scope in greater detail, and cost estimates and associated year-by-year financial analyses were completed. Personnel training, qualifications, management systems and procedures were reviewed and shown to be in place and ready to support the Phase 1B mission. Key assumptions and risks that could negatively impact mission success were evaluated, and appropriate mitigating action plans were developed and scheduled.

  1. Fracture network modelling: an integrated approach for realisation of complex fracture network geometries

    International Nuclear Information System (INIS)

    Srivastava, R.M.

    2007-01-01

    they consist of a family of equally likely renditions of fracture geometry, each one honouring the same surface and subsurface constraints. Such probabilistic models are well suited to studying issues involving risk assessment and quantification of uncertainty. This assists the exploration of geo-scientific uncertainty and how the inherent non-uniqueness of DCMs affects confidence in predictions of how the far-field geosphere affects overall safety of the proposed repository. The approach provides models that are systematic and traceable in the sense that all of the data, assumptions and parameter choices are clearly recorded and auditable. At the same time that subjective decisions are avoided, the various parameter choices still allow reasoned judgement from structural geology and geomechanics to constrain the model. By providing place-holders for such judgements, this approach moves this type of information from an undocumented constraint to a reviewable parameter choice. The technical consistency of these FNMs, their auditability and their visual and scientific realism all contribute to the presentation of geologic safety arguments that demonstrate good judgement, thereby increasing confidence in the entire modelling effort. (author)

  2. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research-in particular for the evaluation of health care practice, programs, and policy-because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...

  4. Electronic Cigarettes and Indoor Air Quality: A Simple Approach to Modeling Potential Bystander Exposures to Nicotine

    Science.gov (United States)

    Colard, Stéphane; O’Connell, Grant; Verron, Thomas; Cahours, Xavier; Pritchard, John D.

    2014-01-01

    There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents. PMID:25547398

  5. Electronic Cigarettes and Indoor Air Quality: A Simple Approach to Modeling Potential Bystander Exposures to Nicotine

    Directory of Open Access Journals (Sweden)

    Stéphane Colard

    2014-12-01

    Full Text Available There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents.
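
    A hedged re-implementation of this kind of single-zone mass-balance model is sketched below: each exhaled puff adds a small, instantaneously well-mixed mass of nicotine to the room, which is then removed by ventilation. All parameter values (room volume, air-exchange rate, mass per puff, puffing rate) are illustrative assumptions rather than the paper's inputs.

      import numpy as np

      room_volume = 50.0          # m^3 (small office), assumed
      air_changes_per_hour = 2.0  # ventilation rate, assumed
      mass_per_puff_ug = 0.05     # ug nicotine exhaled per puff, assumed
      puffs_per_hour = 15
      hours = 8
      dt = 1.0 / 60.0             # time step: 1 minute

      k = air_changes_per_hour            # first-order removal rate (1/h)
      t = np.arange(0, hours, dt)
      c = np.zeros_like(t)                # nicotine concentration (ug/m^3)
      puff_interval = int(round(1.0 / (puffs_per_hour * dt)))

      for n in range(1, len(t)):
          c[n] = c[n - 1] * np.exp(-k * dt)           # dilution/extraction between steps
          if n % puff_interval == 0:
              c[n] += mass_per_puff_ug / room_volume  # instantaneous well-mixed puff

      print(f"peak concentration: {c.max():.4f} ug/m^3")
      print(f"8-hour average:     {c.mean():.4f} ug/m^3")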

  6. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
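
    The point that a finite set of alternative models turns model uncertainty into an ordinary discrete uncertainty can be illustrated with a few lines of arithmetic, as sketched below; the prior probabilities, likelihood values and model predictions are hypothetical.

      import numpy as np

      priors = np.array([0.5, 0.3, 0.2])                 # prior probabilities of models M1..M3
      likelihoods = np.array([2.0e-4, 8.0e-4, 1.0e-4])   # P(data | M_k), assumed given

      posterior = priors * likelihoods
      posterior /= posterior.sum()                       # Bayes' rule over the model set

      # Model-averaged prediction of some quantity of interest, where each model
      # supplies its own point prediction (values are illustrative).
      predictions = np.array([1.0e-5, 3.0e-5, 9.0e-6])
      print("posterior model probabilities:", np.round(posterior, 3))
      print("model-averaged prediction:    ", float(posterior @ predictions))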

  7. Validity of the mockwitness paradigm: testing the assumptions.

    Science.gov (United States)

    McQuiston, Dawn E; Malpass, Roy S

    2002-08-01

    Mockwitness identifications are used to provide a quantitative measure of lineup fairness. Some theoretical and practical assumptions of this paradigm have not been studied in terms of mockwitnesses' decision processes and procedural variation (e.g., instructions, lineup presentation method), and the current experiment was conducted to empirically evaluate these assumptions. Four hundred and eighty mockwitnesses were given physical information about a culprit, received 1 of 4 variations of lineup instructions, and were asked to identify the culprit from either a fair or unfair sequential lineup containing 1 of 2 targets. Lineup bias estimates varied as a result of lineup fairness and the target presented. Mockwitnesses generally reported that the target's physical description was their main source of identifying information. Our findings support the use of mockwitness identifications as a useful technique for sequential lineup evaluation, but only for mockwitnesses who selected only 1 lineup member. Recommendations for the use of this evaluation procedure are discussed.

  8. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    Science.gov (United States)

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  9. On the Validity of the “Thin” and “Thick” Double-Layer Assumptions When Calculating Streaming Currents in Porous Media

    Directory of Open Access Journals (Sweden)

    Matthew D. Jackson

    2012-01-01

    Full Text Available We find that the thin double layer assumption, in which the thickness of the electrical diffuse layer is assumed small compared to the radius of curvature of a pore or throat, is valid in a capillary tubes model so long as the capillary radius is >200 times the double layer thickness, while the thick double layer assumption, in which the diffuse layer is assumed to extend across the entire pore or throat, is valid so long as the capillary radius is >6 times smaller than the double layer thickness. At low surface charge density or high brine concentration (>0.5 M), the validity criteria are less stringent. Our results suggest that the thin double layer assumption is valid in sandstones at low specific surface charge (<10 mC⋅m−2), but may not be valid in sandstones of moderate to small pore-throat size at higher surface charge if the brine concentration is low (<0.001 M). The thick double layer assumption is likely to be valid in mudstones at low brine concentration (<0.1 M) and surface charge (<10 mC⋅m−2), but at higher surface charge, it is likely to be valid only at low brine concentration (<0.003 M). Consequently, neither assumption may be valid in mudstones saturated with natural brines.
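
    The criteria quoted above are easy to check numerically. The sketch below computes the Debye (diffuse-layer) thickness for a 1:1 electrolyte and tests an example capillary radius against the thin-layer (radius > 200 lambda_D) and thick-layer (radius < lambda_D/6) limits stated in the abstract; the chosen radius and concentrations are arbitrary.

      import numpy as np

      e = 1.602176634e-19              # elementary charge (C)
      kB = 1.380649e-23                # Boltzmann constant (J/K)
      NA = 6.02214076e23               # Avogadro constant (1/mol)
      eps = 78.5 * 8.8541878128e-12    # permittivity of water at 25 C (F/m)
      T = 298.15

      def debye_length_m(conc_mol_per_L):
          """Debye length of a symmetric 1:1 electrolyte."""
          n0 = 1000.0 * NA * conc_mol_per_L            # ion pairs per m^3
          return np.sqrt(eps * kB * T / (2.0 * n0 * e ** 2))

      radius = 50e-9                                   # example pore radius: 50 nm
      for c in (1.0, 0.1, 0.01, 0.001):
          ld = debye_length_m(c)
          regime = ("thin-layer assumption ok" if radius > 200 * ld else
                    "thick-layer assumption ok" if radius < ld / 6 else
                    "neither assumption valid")
          print(f"C = {c:7.3f} M  lambda_D = {ld * 1e9:6.2f} nm  -> {regime}")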

  10. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about...... of the temperature on the mechanical resonance frequency is considered and thereby integrated in the final model for long term operations....

  11. Bivariate Gaussian bridges: directional factorization of diffusion in Brownian bridge models.

    Science.gov (United States)

    Kranstauber, Bart; Safi, Kamran; Bartumeus, Frederic

    2014-01-01

    In recent years high resolution animal tracking data has become the standard in movement ecology. The Brownian Bridge Movement Model (BBMM) is a widely adopted approach to describe animal space use from such high resolution tracks. One of the underlying assumptions of the BBMM is isotropic diffusive motion between consecutive locations, i.e. invariant with respect to the direction. Here we propose to relax this often unrealistic assumption by separating the Brownian motion variance into two directional components, one parallel and one orthogonal to the direction of the motion. Our new model, the Bivariate Gaussian bridge (BGB), tracks movement heterogeneity across time. Using the BGB and identifying directed and non-directed movement within a trajectory resulted in more accurate utilisation distributions compared to dynamic Brownian bridges, especially for trajectories with a non-isotropic diffusion, such as directed movement or Lévy like movements. We evaluated our model with simulated trajectories and observed tracks, demonstrating that the improvement of our model scales with the directional correlation of a correlated random walk. We find that many of the animal trajectories do not adhere to the assumptions of the BBMM. The proposed model improves accuracy when describing the space use both in simulated correlated random walks as well as observed animal tracks. Our novel approach is implemented and available within the "move" package for R.
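
    A minimal sketch of the core construction is given below, under the assumption that the bridge position between two fixes is Gaussian with separate variances parallel and orthogonal to the segment direction; setting the two variances equal recovers the isotropic Brownian bridge case. The variance values and coordinates are illustrative, not estimated from tracking data.

      import numpy as np

      def bgb_density_params(z0, z1, t0, t1, t, var_para, var_orth):
          """Mean and covariance of the bridge position at an interior time t."""
          alpha = (t - t0) / (t1 - t0)
          mean = (1 - alpha) * z0 + alpha * z1
          scale = (t1 - t0) * alpha * (1 - alpha)        # Brownian-bridge time factor
          d = z1 - z0
          u = d / np.linalg.norm(d)                      # unit vector along the track
          v = np.array([-u[1], u[0]])                    # orthogonal unit vector
          R = np.column_stack([u, v])
          cov = R @ np.diag([var_para * scale, var_orth * scale]) @ R.T
          return mean, cov

      z0, z1 = np.array([0.0, 0.0]), np.array([1000.0, 200.0])     # two fixes (m)
      mean, cov = bgb_density_params(z0, z1, 0.0, 3600.0, 1800.0,
                                     var_para=4.0, var_orth=0.5)   # m^2/s, assumed
      print("mid-point mean:", mean)
      print("covariance (m^2):\n", np.round(cov, 1))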

  12. Post-Keynesyen Talep Yönelimli Büyüme Modelleri (Post-Keynesian Demand Oriented Growth Models)

    Directory of Open Access Journals (Sweden)

    Yelda Bugay TEKGÜL

    2013-12-01

    Full Text Available The economic literature generally contains growth models dominated by the classical/neo-classical approach. These models focus on the differences in growth rates among countries in terms of the supply of factors of production. Whereas capital accumulation and technical progress are viewed as the main determinants of growth, the increase in per-capita income is determined only by supply-side factors. Should the economy be in a position of under-employment and under-capacity, these approaches are not capable of a satisfactory explanation of economic growth. In the demand-oriented view, the factors that the supply-side approach takes as given are endogenous to the economic system and constrained by demand. In an open economy, growth can be described as a component of a Keynesian demand-oriented economic system, and the resulting models are generally called Post-Keynesian growth models. These models have been developed on two main axes: the “export-led growth model”, introduced by N. Kaldor, and the “balance-of-payments constrained growth model”, introduced by A. P. Thirlwall in 1979. The export-led growth model is based on the assumption that internally determined productivity increases generate a virtuous circle in the economy. The balance-of-payments constrained growth model, on the other hand, is based on the assumptions that a foreign trade deficit cannot continue forever and that the long-run growth rate is a function of exports and of the elasticity of demand for imports of the country. The objective of this paper is to discuss and criticize the export-led growth and balance-of-payments constrained growth models.

  13. Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-01-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.

  14. Child Development Knowledge and Teacher Preparation: Confronting Assumptions.

    Science.gov (United States)

    Katz, Lilian G.

    This paper questions the widely held assumption that acquiring knowledge of child development is an essential part of teacher preparation and teaching competence, especially among teachers of young children. After discussing the influence of culture, parenting style, and teaching style on developmental expectations and outcomes, the paper asserts…

  15. A Janus-Faced Approach to Learning. A Critical Discussion of Habermas' Pragmatic Approach

    Science.gov (United States)

    Italia, Salvatore

    2017-01-01

    A realist approach to learning is what I propose here. This is based on a non-epistemic dimension whose presence is a necessary assumption for a concept of learning of a life-world as complementary to learning within a life-world. I develop my approach in opposition to Jürgen Habermas' pragmatic approach, which seems to lack something from a…

  16. Climate Change: Implications for the Assumptions, Goals and Methods of Urban Environmental Planning

    Directory of Open Access Journals (Sweden)

    Kristina Hill

    2016-12-01

    -based quantitative models of regional system behavior that may soon be used to determine acceptable land uses. Finally, the philosophical assumptions that underlie urban environmental planning are changing to address new epistemological, ontological and ethical assumptions that support new methods and goals. The inability to use the past as a guide to the future, new prioritizations of values for adaptation, and renewed efforts to focus on intergenerational justice are provided as examples. In order to represent a genuine paradigm shift, this review argues that changes must begin to be evident across the underlying assumptions, conceptual frameworks, and methods of urban environmental planning, and be attributable to the same root cause. The examples presented here represent the early stages of a change in the overall paradigm of the discipline.

  17. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  18. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    Science.gov (United States)

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  19. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining different arrival times: suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics: (1) intrinsic delay - the photons may simply have been emitted at two different times by the astrophysical source; (2) delay due to Lorentz invariance violation - perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect; (3) special-relativistic delay - maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong; this, too, would cause photon velocities to be energy-dependent; (4) delay due to gravitational potential - perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies; this would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these

  20. An Extension to Deng’s Entropy in the Open World Assumption with an Application in Sensor Data Fusion

    Directory of Open Access Journals (Sweden)

    Yongchuan Tang

    2018-06-01

    Full Text Available Quantification of the degree of uncertainty in the Dempster-Shafer evidence theory (DST) framework with belief entropy is still an open issue, and even a blank field for the open world assumption. Currently, the existing uncertainty measures in the DST framework are limited to the closed world, where the frame of discernment (FOD) is assumed to be complete. To address this issue, this paper focuses on extending a belief entropy to the open world by simultaneously considering the uncertain information represented by the FOD and the nonzero mass function of the empty set. An extension to Deng’s entropy in the open world assumption (EDEOW) is proposed as a generalization of Deng’s entropy, and it degenerates to the Deng entropy in the closed world wherever necessary. In order to test the reasonability and effectiveness of the extended belief entropy, an EDEOW-based information fusion approach is proposed and applied to sensor data fusion under uncertainty. The experimental results verify the usefulness and applicability of the extended measure as well as the modified sensor data fusion method. In addition, a few open issues remain in the current work: the necessary properties of a belief entropy in the open world assumption, whether there exists a belief entropy that satisfies all the existing properties, and what the most appropriate fusion frame is for sensor data fusion under uncertainty.

  1. Dialogic or Dialectic? The Significance of Ontological Assumptions in Research on Educational Dialogue

    Science.gov (United States)

    Wegerif, Rupert

    2008-01-01

    This article explores the relationship between ontological assumptions and studies of educational dialogue through a focus on Bakhtin's "dialogic". The term dialogic is frequently appropriated to a modernist framework of assumptions, in particular the neo-Vygotskian or sociocultural tradition. However, Vygotsky's theory of education is dialectic,…

  2. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    Science.gov (United States)

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model; otherwise, using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the use of the classical Cox PH model and the second part of the analysis is based on the use of random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the
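
    The workflow described above can be sketched on simulated data as follows, assuming the third-party lifelines package (and, for the forest itself, scikit-survival): fit a Cox model, test the proportional hazards assumption with Schoenfeld-residual tests, and treat flagged covariates with a random survival forest instead. The column names, effect sizes and censoring rule below are made up, not taken from the Demographic and Health Survey data.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(11)
      n = 800
      df = pd.DataFrame({
          "sex_child": rng.integers(0, 2, n),
          "births_last_year": rng.poisson(1.2, n),
      })
      hazard = 0.02 * np.exp(0.3 * df["sex_child"] + 0.2 * df["births_last_year"])
      df["time"] = rng.exponential(1.0 / hazard)
      df["event"] = (df["time"] < 60).astype(int)        # administrative censoring at 60 months
      df.loc[df["event"] == 0, "time"] = 60

      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="event")
      cph.print_summary()
      cph.check_assumptions(df, p_value_threshold=0.05)  # Schoenfeld-residual PH tests

      # Covariates flagged here could instead be handled with a random survival
      # forest (e.g. sksurv.ensemble.RandomSurvivalForest), which does not require
      # the proportional-hazards assumption.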

  3. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  4. Adaptive Modeling of the International Space Station Electrical Power System

    Science.gov (United States)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  5.  Basic assumptions and definitions in the analysis of financial leverage

    Directory of Open Access Journals (Sweden)

    Tomasz Berent

    2015-12-01

    Full Text Available The financial leverage literature has been in a state of terminological chaos for decades, as evidenced, for example, by the Nobel Prize Lecture mistake on the one hand and the global financial crisis on the other. A meaningful analysis of the leverage phenomenon calls for the formulation of a coherent set of assumptions and basic definitions. The objective of the paper is to answer this call. The paper defines leverage as a value-neutral concept useful in explaining the magnification effect exerted by financial activity upon the whole spectrum of financial results. By adopting constructivism as a methodological approach, we are able to introduce various types of leverage, such as capital and income, base and non-base, accounting and market value, for levels and for distances (absolute and relative), costs and simple, etc. The new definitions formulated here are subsequently adopted in the analysis of the content of leverage statements used by the leading finance textbook.

  6. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.

  7. Goal-oriented model adaptivity for viscous incompressible flows

    KAUST Repository

    van Opstal, T. M.

    2015-04-04

    © 2015, Springer-Verlag Berlin Heidelberg. In van Opstal et al. (Comput Mech 50:779–788, 2012) airbag inflation simulations were performed where the flow was approximated by Stokes flow. Inside the intricately folded initial geometry the Stokes assumption is argued to hold. This linearity assumption leads to a boundary-integral representation, the key to bypassing mesh generation and remeshing. It therefore enables very large displacements with near-contact. However, such a coarse assumption cannot hold throughout the domain; where it breaks down, one needs to revert to the original model. The present work formalizes this idea. A model adaptive approach is proposed, in which the coarse model (a Stokes boundary-integral equation) is locally replaced by the original high-fidelity model (Navier–Stokes) based on a-posteriori estimates of the error in a quantity of interest. This adaptive modeling framework aims at taking away the burden and heuristics of manually partitioning the domain while providing new insight into the physics. We elucidate how challenges pertaining to model disparity can be addressed. Essentially, the solution in the interior of the coarse model domain is reconstructed as a post-processing step. We furthermore present a two-dimensional numerical experiment to show that the error estimator is reliable.

  8. The assumption of heterogeneous or homogeneous radioactive contamination in soil/sediment: does it matter in terms of the external exposure of fauna?

    International Nuclear Information System (INIS)

    Beaugelin-Seiller, K.

    2014-01-01

    The classical approach to environmental radioprotection is based on the assumption of homogeneously contaminated media. However, in soils and sediments there may be a significant variation of radioactivity with depth. The effect of this heterogeneity was investigated by examining the external exposure of various sediment and soil organisms, and determining the resulting dose rates, assuming a realistic combination of locations and radionuclides. The results were dependent on the exposure situation, i.e., the organism, its location, and the quality and quantity of radionuclides. The dose rates ranged over three orders of magnitude. The assumption of homogeneous contamination was not consistently conservative (if associated with a level of radioactivity averaged over the full thickness of soil or sediment that was sampled). Dose assessment for screening purposes requires consideration of the highest activity concentration measured in a soil/sediment that is considered to be homogeneously contaminated. A more refined assessment (e.g., higher tier of a graded approach) should take into consideration a more realistic contamination profile, and apply different dosimetric approaches. - Highlights: • Defining contamination as homogeneous may not be conservative for dose assessment. • The impact of source heterogeneity on dose is closely linked to the exposure scenario. • Dosimetric calculations (method and tool) should differ from screening to higher tiers

  9. Consistent Conformal Extensions of the Standard Model arXiv

    CERN Document Server

    Loebbert, Florian; Plefka, Jan

    The question of whether classically conformal modifications of the standard model are consistent with experimental observations has recently been subject to renewed interest. The method of Gildener and Weinberg provides a natural framework for the study of the effective potential of the resulting multi-scalar standard model extensions. This approach relies on the assumption of the ordinary loop hierarchy $\lambda_\text{s} \sim g^2_\text{g}$ of scalar and gauge couplings. On the other hand, Andreassen, Frost and Schwartz recently argued that in the (single-scalar) standard model, gauge invariant results require the consistent scaling $\lambda_\text{s} \sim g^4_\text{g}$. In the present paper we contrast these two hierarchy assumptions and illustrate the differences in the phenomenological predictions of minimal conformal extensions of the standard model.

  10. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

    In this paper, the stochastic automata networks (SAN) formalism is applied to model the reliability of power system substations. The proposed strategy allows reducing the size of the state space of the Markov chain model and simplifying the system specification. Two case studies of standard substation configurations are considered in detail. SAN models with different assumptions were created. The SAN approach is compared with an exact reliability calculation using a minimal path set method. The modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment. In this case, the implementation of a Markov chain model using the SAN method is a relatively easy task. - Highlights: • We present a methodology for applying the stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results of systems with independent automata and functional transition rates are similar. • The conditions under which total independence of automata can be assumed are addressed.

  11. Framework for determining airport daily departure and arrival delay thresholds: statistical modelling approach.

    Science.gov (United States)

    Wesonga, Ronald; Nabugoomu, Fabian

    2016-01-01

    The study derives a framework for assessing airport efficiency by evaluating optimal arrival and departure delay thresholds. Assumptions of airport efficiency measurements, though based upon minimum numeric values such as 15 min of turnaround time, cannot be extrapolated to determine the proportion of delay-days of an airport. This study explored the concept of a delay threshold to determine the proportion of delay-days, as an expansion of the theory of delay and our previous work. A data-driven approach using statistical modelling was applied to a limited set of determinants of daily delay at an airport. For the purpose of testing the efficacy of the threshold levels, operational data for Entebbe International Airport were used as a case study. Findings show differences in the proportions of delay at departure (μ = 0.499; 95 % CI = 0.023) and arrival (μ = 0.363; 95 % CI = 0.022). A multivariate logistic model confirmed an optimal daily departure and arrival delay threshold of 60 % for the airport, given the four probable thresholds {50, 60, 70, 80}. The decision on the threshold value was based on the number of significant determinants, the goodness-of-fit statistics based on the Wald test, and the area under the receiver operating characteristic curves. These findings propose a modelling framework to generate information relevant to Air Traffic Management for planning and measuring airport operational efficiency.

  12. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10³ cells and 1.2×10⁶ molecules. The model produces cell migration patterns that are comparable to laboratory observations.
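
    A much-simplified, hedged sketch of the hybrid idea is given below: cells are individual agents following a simple behavioural rule, while the chemoattractant is a continuous quantity evolved with a discretised diffusion equation. The geometry, parameters and rules are illustrative only and far smaller than the assay described in the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      L, dx, dt = 100.0, 1.0, 0.1
      x_grid = np.arange(0.0, L, dx)
      attractant = np.exp(-((x_grid - 80.0) ** 2) / 400.0)   # chemical field, peak at x = 80
      D = 0.5                                                # diffusion coefficient of the field
      chi = 5.0                                              # chemotactic sensitivity of the agents

      cells = rng.uniform(40.0, 60.0, size=50)               # agent (cell) positions in 1-D

      for _ in range(2000):
          # Field update: explicit finite-difference diffusion (edges held fixed for simplicity).
          lap = np.zeros_like(attractant)
          lap[1:-1] = (attractant[2:] - 2 * attractant[1:-1] + attractant[:-2]) / dx ** 2
          attractant += dt * D * lap

          # Agent update: biased random walk up the local attractant gradient.
          idx = np.clip((cells / dx).astype(int), 1, len(x_grid) - 2)
          grad = (attractant[idx + 1] - attractant[idx - 1]) / (2 * dx)
          cells += dt * chi * grad + np.sqrt(dt) * rng.normal(0.0, 0.5, size=cells.size)
          cells = np.clip(cells, 0.0, L - dx)

      print("mean cell position: %.1f (started near 50, attractant peak at 80)" % cells.mean())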

  13. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  14. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to be reproduced reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) Type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in an Euclidian space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimension of latter defines the shape of a taproot-system or a shallow-root-system respectively; ii) elliptic

  15. Approach to transverse equilibrium in axial channeling

    International Nuclear Information System (INIS)

    Fearick, R.W.

    2000-01-01

    Analytical treatments of channeling rely on the assumption of equilibrium on the transverse energy shell. The approach to equilibrium, and the nature of the equilibrium achieved, are examined using solutions of the equations of motion in the continuum multi-string model. The results show that the motion is chaotic in the absence of dissipative processes, and a complicated structure develops in phase space which prevents the development of the simple equilibrium usually assumed. The role of multiple scattering in smoothing out the equilibrium distribution is investigated.
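
    For reference, the conserved quantity in the standard continuum (string) approximation, on whose "shell" the assumed equilibrium is defined, can be written in the usual textbook form (not specific to this paper's multi-string calculation) as

        $$E_\perp = \tfrac{1}{2}\, p v\, \psi^2 + U(\mathbf{r}_\perp),$$

    where $p$ and $v$ are the projectile momentum and velocity, $\psi$ the angle between the trajectory and the axis, and $U$ the continuum string potential.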

  16. Modeling Misbehavior in Cooperative Diversity: A Dynamic Game Approach

    Science.gov (United States)

    Dehnie, Sintayehu; Memon, Nasir

    2009-12-01

    Cooperative diversity protocols are designed with the assumption that terminals always help each other in a socially efficient manner. This assumption may not be valid in commercial wireless networks where terminals may misbehave for selfish or malicious intentions. The presence of misbehaving terminals creates a social dilemma where terminals exhibit uncertainty about the cooperative behavior of other terminals in the network. Cooperation in a social dilemma is characterized by a suboptimal Nash equilibrium where wireless terminals opt out of cooperation. Hence, without establishing a mechanism to detect and mitigate effects of misbehavior, it is difficult to maintain socially optimal cooperation. In this paper, we first examine effects of misbehavior assuming a static game model and show that cooperation under existing cooperative protocols is characterized by a noncooperative Nash equilibrium. Using evolutionary game dynamics we show that a small number of mutants can successfully invade a population of cooperators, which indicates that misbehavior is an evolutionarily stable strategy (ESS). Our main goal is to design a mechanism that would enable wireless terminals to select reliable partners in the presence of uncertainty. To this end, we formulate cooperative diversity as a dynamic game with incomplete information. We show that the proposed dynamic game formulation satisfies the conditions for the existence of a perfect Bayesian equilibrium.
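
    The invasion-of-cooperators argument can be sketched with standard replicator dynamics. The payoff matrix below is a hypothetical prisoner's-dilemma-like choice, not taken from the paper; the code only shows why a small share of mutants grows to dominate when defection strictly dominates cooperation.

        # Minimal replicator-dynamics sketch: a small fraction of "misbehaving"
        # terminals invading a population of cooperators (hypothetical payoffs).
        import numpy as np

        # Payoff A[i][j]: row strategy against column strategy,
        # strategies: 0 = cooperate (relay for partner), 1 = defect (free-ride)
        A = np.array([[3.0, 0.0],
                      [4.0, 1.0]])   # defection strictly dominates => mutants spread

        x = np.array([0.95, 0.05])   # initial shares: 95% cooperators, 5% mutants
        dt = 0.01
        for _ in range(5000):
            f = A @ x                      # expected payoff of each strategy
            phi = x @ f                    # population-average payoff
            x = x + dt * x * (f - phi)     # replicator equation
            x = np.clip(x, 0.0, 1.0)
            x = x / x.sum()

        print(x)  # converges toward [0, 1]: defection persists as the stable outcome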

  17. Modeling Misbehavior in Cooperative Diversity: A Dynamic Game Approach

    Directory of Open Access Journals (Sweden)

    Sintayehu Dehnie

    2009-01-01

    Cooperative diversity protocols are designed with the assumption that terminals always help each other in a socially efficient manner. This assumption may not be valid in commercial wireless networks where terminals may misbehave for selfish or malicious intentions. The presence of misbehaving terminals creates a social dilemma where terminals exhibit uncertainty about the cooperative behavior of other terminals in the network. Cooperation in a social dilemma is characterized by a suboptimal Nash equilibrium where wireless terminals opt out of cooperation. Hence, without establishing a mechanism to detect and mitigate effects of misbehavior, it is difficult to maintain socially optimal cooperation. In this paper, we first examine effects of misbehavior assuming a static game model and show that cooperation under existing cooperative protocols is characterized by a noncooperative Nash equilibrium. Using evolutionary game dynamics we show that a small number of mutants can successfully invade a population of cooperators, which indicates that misbehavior is an evolutionarily stable strategy (ESS). Our main goal is to design a mechanism that would enable wireless terminals to select reliable partners in the presence of uncertainty. To this end, we formulate cooperative diversity as a dynamic game with incomplete information. We show that the proposed dynamic game formulation satisfies the conditions for the existence of a perfect Bayesian equilibrium.

  18. Towards New Probabilistic Assumptions in Business Intelligence

    OpenAIRE

    Schumann Andrew; Szelc Andrzej

    2015-01-01

    One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...

  19. Model documentation, Renewable Fuels Module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-01-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the Annual Energy Outlook 1998 (AEO98) forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. For AEO98, the RFM was modified in three principal ways, introducing capital cost elasticities of supply for new renewable energy technologies, modifying biomass supply curves, and revising assumptions for use of landfill gas from municipal solid waste (MSW). In addition, the RFM was modified in general to accommodate projections beyond 2015 through 2020. Two supply elasticities were introduced, the first reflecting short-term (annual) cost increases from manufacturing, siting, and installation bottlenecks incurred under conditions of rapid growth, and the second reflecting longer term natural resource, transmission and distribution upgrade, and market limitations increasing costs as more and more of the overall resource is used. Biomass supply curves were also modified, basing forest products supplies on production rather than on inventory, and expanding energy crop estimates to include states west of the Mississippi River using information developed by the Oak Ridge National Laboratory. Finally, for MSW, several assumptions for the use of landfill gas were revised and extended.

  20. Incorporation of constructivist assumptions into problem-based instruction: a literature review.

    Science.gov (United States)

    Kantar, Lina

    2014-05-01

    The purpose of this literature review was to explore the use of distinct assumptions of constructivism when studying the impact of problem-based learning (PBL) on learners in undergraduate nursing programs. A content analysis research technique was used. The literature review included information retrieved from sources selected via electronic databases, such as EBSCOhost, ProQuest, Sage Publications, SLACK Incorporation, Springhouse Corporation, and Digital Dissertations. The literature review was conducted utilizing key terms and phrases associated with problem-based learning in undergraduate nursing education. Out of the 100 reviewed abstracts, only 15 studies met the inclusion criteria for the review. Four constructivist assumptions formed the basis of the review process, allowing for analysis and evaluation of the findings, followed by identification of issues and recommendations for the discipline and its research practice in the field of PBL. This literature review provided evidence that the nursing discipline is employing PBL in its programs, yet with limited data supporting conceptions of the constructivist perspective underlying this pedagogical approach. Three major issues were assessed and formed the basis for subsequent recommendations: (a) limited use of a theoretical framework and absence of constructivism in most of the studies, (b) incompatibility between research measures and research outcomes, and (c) brief exposure to PBL during which the change was measured. Educators have made the right choice in employing PBL as a pedagogical practice, yet the need to base implementation on constructivism is mandatory if the aim is a better preparation of graduates for practice. Undeniably there is limited convincing evidence regarding integration of constructivism in nursing education. Research that assesses the impact of PBL on learners' problem-solving and communication skills, self-direction, and motivation is paramount. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  2. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was conducted. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
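
    As a hedged illustration of the comparison discussed in the review, the sketch below fits one statistical model (logistic regression) and one artificial neural network on synthetic data and compares their discrimination on a held-out set. It is not the pipeline of any reviewed study, and all settings are arbitrary.

        # Illustrative only: statistical model vs. ANN on synthetic risk data.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                            random_state=0).fit(X_tr, y_tr)

        # Validate both models on held-out data (discrimination via AUC)
        print("logistic AUC:", roc_auc_score(y_te, logit.predict_proba(X_te)[:, 1]))
        print("ANN AUC:     ", roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))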

  3. Accuracy and performance of 3D mask models in optical projection lithography

    Science.gov (United States)

    Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar

    2011-04-01

    Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.
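
    A minimal sketch of the thin mask (Kirchhoff) approach, assuming a simple binary line/space pattern: the mask is treated as an ideal transmission function whose diffraction orders are its Fourier coefficients. The rigorous EMF and decomposition models compared in the paper are not reproduced here, and the pitch and line-width values are hypothetical.

        # Thin-mask (Kirchhoff) sketch: diffraction orders of an ideal binary grating.
        import numpy as np

        pitch = 180e-9                 # hypothetical line/space pitch (m)
        cd = 90e-9                     # hypothetical line width (m)
        n = 1024
        x = np.linspace(0.0, pitch, n, endpoint=False)

        # Binary transmission: 0 under the absorber line, 1 in the open space
        t = np.where(np.abs(x - pitch / 2) < cd / 2, 0.0, 1.0)

        orders = np.fft.fftshift(np.fft.fft(t) / n)   # complex diffraction orders
        m = np.arange(-n // 2, n // 2)                # diffraction-order index
        for k in range(-3, 4):
            amp = orders[m == k][0]
            print(f"order {k:+d}: |a| = {abs(amp):.3f}")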

  4. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the $m_\\chi-\\sigma_n$ plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the $v_{min}-\\tilde{g}$ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from $v_{min}$ to nuclear recoil momentum ($p_R$), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call $\\tilde{h}(p_R)$. The entire family of conventional halo-independent $\\tilde{g}(v_{min})$ plots for all DM masses are directly found from the single $\\tilde{h}(p_R)$ plot through a simple re...
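
    For elastic scattering, the change of variables rests on the standard kinematic relations (written here only as a hedged reminder):

        $$p_R = \sqrt{2 m_N E_R}, \qquad v_{min} = \frac{p_R}{2\mu_{\chi N}}, \qquad \mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N},$$

    so presenting the halo integral as a function of the recoil momentum $p_R$ rather than $v_{min}$ avoids committing to a fiducial value of $m_\chi$.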

  5. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
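
    A minimal sketch of the Nash-Sutcliffe Efficiency used to score these validations is given below; the observed and simulated arrays are placeholders, not SNOTEL values.

        # Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 matches the mean of the data.
        import numpy as np

        def nash_sutcliffe(observed, simulated):
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            num = np.sum((observed - simulated) ** 2)
            den = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - num / den

        obs = np.array([120.0, 340.0, 210.0, 80.0, 400.0])   # e.g. April 1 SWE (mm)
        sim = np.array([130.0, 320.0, 230.0, 95.0, 380.0])
        print(round(nash_sutcliffe(obs, sim), 3))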

  6. Heterosexual assumptions in verbal and non-verbal communication in nursing.

    Science.gov (United States)

    Röndahl, Gerd; Innala, Sune; Carlsson, Marianne

    2006-11-01

    This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they

  7. A nonlinear approach of elastic reflection waveform inversion

    KAUST Repository

    Guo, Qiang

    2016-09-06

    Elastic full waveform inversion (EFWI) embodies the original intention of waveform inversion at its inception as it is a better representation of the mostly solid Earth. However, compared with the acoustic P-wave assumption, EFWI for P- and S-wave velocities using multi-component data admitted mixed results. Full waveform inversion (FWI) is a highly nonlinear problem and this nonlinearity only increases under the elastic assumption. Reflection waveform inversion (RWI) can mitigate the nonlinearity by relying on transmissions from reflections focused on inverting low wavenumber components of the model. In our elastic endeavor, we split the P- and S-wave velocities into low wavenumber and perturbation components and propose a nonlinear approach to invert for both of them. The new optimization problem is built on an objective function that depends on both background and perturbation models. We utilize an equivalent stress source based on the model perturbation to generate reflection instead of demigrating from an image, which is applied in conventional RWI. Application on a slice of an ocean-bottom data shows that our method can efficiently update the low wavenumber parts of the model, but more so, obtain perturbations that can be added to the low wavenumbers for a high resolution output.
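
    Schematically, the splitting described above can be written as follows (hedged notation; the authors' exact objective and regularization are not reproduced here):

        $$v_p = v_{p0} + \delta v_p, \qquad v_s = v_{s0} + \delta v_s, \qquad \min_{v_0,\,\delta v}\; \tfrac{1}{2}\sum_{s,r}\big\lVert d_{\mathrm{syn}}(v_0,\delta v) - d_{\mathrm{obs}}\big\rVert^2,$$

    with the smooth backgrounds $v_0$ updated through the transmission-like sensitivity of reflections generated from the perturbations $\delta v$.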

  8. A nonlinear approach of elastic reflection waveform inversion

    KAUST Repository

    Guo, Qiang; Alkhalifah, Tariq Ali

    2016-01-01

    Elastic full waveform inversion (EFWI) embodies the original intention of waveform inversion at its inception as it is a better representation of the mostly solid Earth. However, compared with the acoustic P-wave assumption, EFWI for P- and S-wave velocities using multi-component data admitted mixed results. Full waveform inversion (FWI) is a highly nonlinear problem and this nonlinearity only increases under the elastic assumption. Reflection waveform inversion (RWI) can mitigate the nonlinearity by relying on transmissions from reflections focused on inverting low wavenumber components of the model. In our elastic endeavor, we split the P- and S-wave velocities into low wavenumber and perturbation components and propose a nonlinear approach to invert for both of them. The new optimization problem is built on an objective function that depends on both background and perturbation models. We utilize an equivalent stress source based on the model perturbation to generate reflection instead of demigrating from an image, which is applied in conventional RWI. Application on a slice of an ocean-bottom data shows that our method can efficiently update the low wavenumber parts of the model, but more so, obtain perturbations that can be added to the low wavenumbers for a high resolution output.

  9. Assumptions of Customer Knowledge Enablement in the Open Innovation Process

    Directory of Open Access Journals (Sweden)

    Jokubauskienė Raminta

    2017-08-01

    In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but, nevertheless, customers stand as the cornerstone in this area, since customers’ knowledge is one of the most important sources of new knowledge and ideas. When evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customers’ knowledge management facilitates the creation of innovations. However, other factors that influence open innovation, and, at the same time, customers’ knowledge management, should also be examined. This article presents a theoretical model, which reveals the assumptions of the open innovation process and the impact on the firm’s performance.

  10. Simultaneous genetic analysis of longitudinal means and covariance structure in the simplex model using twin data

    NARCIS (Netherlands)

    Dolan, C.V.; Molenaar, P.C.M.; Boomsma, D.I.

    1991-01-01

    D. Soerbom's (1974, 1976) simplex model approach to simultaneous analysis of means and covariance structure was applied to analysis of means observed in a single group. The present approach to the simultaneous biometric analysis of covariance and mean structure is based on the testable assumption

  11. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
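
    A minimal local sensitivity sketch, assuming a generic two-state ODE model rather than any specific pathway from the review: one parameter is perturbed by a small relative step and the normalised change of a chosen output is reported.

        # Local (finite-difference) sensitivity of a model output to each parameter.
        import numpy as np
        from scipy.integrate import odeint

        def model(y, t, k1, k2):
            s, p = y
            return [-k1 * s, k1 * s - k2 * p]

        def output(params, y0=(1.0, 0.0), t_end=10.0):
            k1, k2 = params
            t = np.linspace(0.0, t_end, 200)
            sol = odeint(model, y0, t, args=(k1, k2))
            return sol[-1, 1]                      # product concentration at t_end

        def local_sensitivity(params, i, rel_step=0.01):
            """d(output)/d(param_i), normalised by param_i / output."""
            base = output(params)
            perturbed = list(params)
            perturbed[i] *= (1.0 + rel_step)
            d_out = output(perturbed) - base
            d_par = params[i] * rel_step
            return (d_out / d_par) * (params[i] / base)

        params = [0.5, 0.2]
        for i, name in enumerate(["k1", "k2"]):
            print(name, round(local_sensitivity(params, i), 4))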

  12. Observing gravitational-wave transient GW150914 with minimal assumptions

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. C.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brocki, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. 
-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. R.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; de Haas, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijhunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, R.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prolchorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. 
F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shithriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlhruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, R. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be

  13. Oil price assumptions in macroeconomic forecasts: should we follow future market expectations?

    International Nuclear Information System (INIS)

    Coimbra, C.; Esteves, P.S.

    2004-01-01

    In macroeconomic forecasting, in spite of their important role in price and activity developments, oil prices are usually taken as an exogenous variable, for which assumptions have to be made. This paper evaluates the forecasting performance of futures market prices against the other popular technical procedure, the carry-over assumption. The results suggest that there is almost no difference between opting for futures market prices or using the carry-over assumption for short-term forecasting horizons (up to 12 months), while, for longer-term horizons, they favour the use of futures market prices. However, as futures market prices reflect market expectations for world economic activity, futures oil prices should be adjusted whenever market expectations for world economic growth are different to the values underlying the macroeconomic scenarios, in order to fully ensure the internal consistency of those scenarios. (Author)
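
    A toy illustration of the comparison, with entirely made-up numbers: the carry-over assumption keeps the oil price at its last observed value, the futures-based assumption uses the quoted curve, and the forecast errors of the two are then compared.

        # Toy comparison of the carry-over assumption vs. futures market prices.
        import numpy as np

        last_spot = 30.0                                       # last observed price
        futures_curve = np.array([31.0, 32.0, 33.5, 35.0])     # hypothetical quotes, h = 1..4
        realised = np.array([32.0, 34.0, 36.0, 37.0])          # hypothetical out-turns

        carry_over = np.full_like(realised, last_spot)

        def rmse(forecast, outcome):
            return float(np.sqrt(np.mean((forecast - outcome) ** 2)))

        print("carry-over RMSE:", rmse(carry_over, realised))
        print("futures    RMSE:", rmse(futures_curve, realised))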

  14. Analysis On Political Speech Of Susilo Bambang Yudhoyono: Common Sense Assumption And Ideology

    Directory of Open Access Journals (Sweden)

    Sayit Abdul Karim

    2015-10-01

    This paper presents an analysis of the political speech of Susilo Bambang Yudhoyono (SBY), the former president of Indonesia, at the Indonesian conference on “Moving towards sustainability: together we must create the future we want”. Ideologies are closely linked to power and language because using language is the commonest form of social behavior, and the form of social behavior where we rely most on ‘common-sense’ assumptions. The objectives of this study are to discuss the common sense assumption and ideology by means of language use in SBY’s political speech, which is mainly grounded in Norman Fairclough’s theory of language and power in critical discourse analysis. There are two main problems of analysis, namely: first, what are the common sense assumption and ideology in Susilo Bambang Yudhoyono’s political speech; and second, how do they relate to each other in the political discourse? The data used in this study was in the form of written text on “moving towards sustainability: together we must create the future we want”. A qualitative descriptive analysis was employed to analyze the common sense assumption and ideology in the written text of Susilo Bambang Yudhoyono’s political speech, which was delivered at the Riocentro Convention Center, Rio de Janeiro on June 20, 2012. One dimension of ‘common sense’ is the meaning of words. The results showed that the common sense assumption and ideology conveyed through SBY’s specific words or expressions can significantly explain how political discourse is constructed and affected by SBY’s rule and position, life experience, and power relations. He used language as a powerful social tool to present his common sense assumption and ideology to convince his audiences and fellow citizens that the future of sustainability has been an important agenda for all people.

  15. Moving Beyond the Systems Approach in SCM and Logistics Research

    DEFF Research Database (Denmark)

    Nilsson, Fredrik; Gammelgaard, Britta

    2012-01-01

    Purpose – The purpose of this paper is to provide a paradigmatic reflection on theoretical approaches recently identified in logistics and supply chain management (SCM), namely complex adaptive systems and complexity thinking, and to compare them to the dominant approach in logistics and SCM research, namely the systems approach. By analyzing the basic assumptions of the three approaches, SCM and logistics researchers are guided in their choice of research approaches, which increases their awareness of the consequences different approaches have on theory and practice. Design/methodology/approach – … Compared to the dominant approach in SCM and logistics research, the systems approach, it is concluded that the underlying assumptions of complex adaptive systems and complexity thinking are more appropriate than those of the systems approach for contemporary challenges of organizational complexity in SCM and logistics. It is found...

  16. Assessment of Constraint Effects based on Local Approach

    International Nuclear Information System (INIS)

    Lee, Tae Rin; Chang, Yoon Suk; Choi, Jae Boong; Seok, Chang Sung; Kim, Young Jin

    2005-01-01

    Traditional fracture mechanics has been used to ensure structural integrity, in which geometry independence is assumed in crack tip deformation and fracture toughness. However, the assumption is applicable only within limited conditions. To address fracture covering a broad range of loading and crack geometries, the two-parameter global approach and the local approach have been proposed. The two-parameter global approach can quantify the load and crack geometry effects by adopting the T-stress or Q-parameter, but it is time-consuming and expensive since numerous experiments and finite element (FE) analyses are necessary. On the other hand, the local approach evaluates the load and crack geometry effects based on a damage model. Once material-specific fitting constants are determined from a few experiments and FE analyses, the fracture resistance characteristics can be obtained by numerical simulation. The purpose of this paper is to investigate constraint effects for compact tension (CT) specimens with different in-plane or out-of-plane sizes using the local approach. Both the modified GTN model and the Rousselier model are adopted to examine the ductile fracture behavior of SA515 Gr.60 carbon steel at high temperature. The fracture resistance (J-R) curves are estimated through numerical analysis and compared with corresponding experimental results; then, crack length, thickness and side-groove effects are evaluated.
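
    For reference, the GTN yield potential in its commonly cited form is (the "modified" variant used in the paper may differ in detail):

        $$\Phi = \left(\frac{\sigma_{eq}}{\sigma_y}\right)^{2} + 2 q_1 f^{*} \cosh\!\left(\frac{3 q_2 \sigma_m}{2\sigma_y}\right) - \left(1 + q_3 {f^{*}}^{2}\right) = 0,$$

    with $\sigma_{eq}$ the von Mises stress, $\sigma_m$ the mean stress, $\sigma_y$ the matrix flow stress, $f^{*}$ the effective void volume fraction, and $q_1, q_2, q_3$ fitting constants (often $q_3 = q_1^2$).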

  17. Nonequilibrium pressurizer model; Model za neravnotezne uslove u sudu za odrzavanje pritiska

    Energy Technology Data Exchange (ETDEWEB)

    Stevanovic, V; Studovic, M [masinski fakultet, Beograd (Yugoslavia)

    1984-07-01

    The paper presents a nonequilibrium pressurizer model developed at the Faculty of Mechanical Engineering as a sub-model of a complete NSSS model for predicting the behaviour of the corresponding components under transient conditions. Unlike other approaches, the developed model starts from the assumption that the governing processes in pressurizer behaviour are the interfacial heat and mass transfer processes. Such a procedure faces difficulties in specifying the interfacial areas and the thermodynamic potentials for mass and energy transfer across the interfaces during thermodynamic nonequilibrium between vapour and liquid. To overcome these difficulties, mass and energy transfer parameters were introduced, which successfully solve this problem. The model was verified against several analytical and experimental results. (author)
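
    One common way to parameterize the interfacial exchange described above is through an interfacial heat transfer coefficient (a generic closure, not necessarily the one adopted in the paper):

        $$\Gamma_i = \frac{h_i A_i \,\big(T_{sat}(p) - T_l\big)}{h_{fg}},$$

    where $\Gamma_i$ is the interfacial condensation rate (negative values indicating net evaporation), $h_i$ the interfacial heat transfer coefficient, $A_i$ the interface area, $T_l$ the liquid temperature and $h_{fg}$ the latent heat; the vapour and liquid mass and energy balances are then coupled through $\Gamma_i$.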

  18. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    In recent years, increasing interest has been directed at IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant...

  19. Modelling UK energy demand to 2000

    International Nuclear Information System (INIS)

    Thomas, S.D.

    1980-01-01

    A recent long-term demand forecast for the UK was made by Cheshire and Surrey (SPRU Occasional Paper Series No.5, Science Policy Research Unit, Univ. of Sussex, 1978). Although they adopted a sectoral approach, their study leaves some questions unanswered. Do they succeed in their aim of making all their assumptions fully explicit? How sensitive are their estimates to changes in assumptions and policies? Are important problems and 'turning points' fully identified in the period up to and immediately beyond their time horizon of 2000? The author addresses these questions by using a computer model based on the study by Cheshire and Surrey. This article is a shortened version of the report, S.D. Thomas, 'Modelling UK Energy Demand to 2000', Operational Research, Univ. of Sussex, Brighton, UK, 1979, in which full details of the author's model are given. Copies are available from the author. (author)

  20. Modelling UK energy demand to 2000

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, S D [Sussex Univ., Brighton (UK)

    1980-03-01

    A recent long-term demand forecast for the UK was made by Cheshire and Surrey (SPRU Occasional Paper Series No.5, Science Policy Research Unit, Univ. of Sussex, 1978). Although they adopted a sectoral approach, their study leaves some questions unanswered. Do they succeed in their aim of making all their assumptions fully explicit? How sensitive are their estimates to changes in assumptions and policies? Are important problems and 'turning points' fully identified in the period up to and immediately beyond their time horizon of 2000? The author addresses these questions by using a computer model based on the study by Cheshire and Surrey. This article is a shortened version of the report, S.D. Thomas, 'Modelling UK Energy Demand to 2000', Operational Research, Univ. of Sussex, Brighton, UK, 1979, in which full details of the author's model are given. Copies are available from the author.

  1. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    Science.gov (United States)

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  2. Robust Measurement via A Fused Latent and Graphical Item Response Theory Model.

    Science.gov (United States)

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang

    2018-03-12

    Item response theory (IRT) plays an important role in psychological and educational measurement. Unlike the classical testing theory, IRT models aggregate the item level information, yielding more accurate measurements. Most IRT models assume local independence, an assumption not likely to be satisfied in practice, especially when the number of items is large. Results in the literature and simulation studies in this paper reveal that misspecifying the local independence assumption may result in inaccurate measurements and differential item functioning. To provide more robust measurements, we propose an integrated approach by adding a graphical component to a multidimensional IRT model that can offset the effect of unknown local dependence. The new model contains a confirmatory latent variable component, which measures the targeted latent traits, and a graphical component, which captures the local dependence. An efficient proximal algorithm is proposed for the parameter estimation and structure learning of the local dependence. This approach can substantially improve the measurement, given no prior information on the local dependence structure. The model can be applied to measure both a unidimensional latent trait and multidimensional latent traits.
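
    In the simplest two-parameter logistic IRT setting, the local independence assumption that the fused model is designed to relax reads (standard notation, for orientation only):

        $$P(X_j = 1 \mid \theta) = \frac{1}{1 + e^{-a_j(\theta - b_j)}}, \qquad P(X_1, \ldots, X_J \mid \theta) = \prod_{j=1}^{J} P(X_j \mid \theta),$$

    where the product over items is exactly the conditional independence that the added graphical component offsets when it fails.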

  3. On a model-based approach to radiation protection

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.

    2002-01-01

    There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)
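
    The exponential effect-fluence relation referred to above can be written schematically as

        $$E(\Phi) = 1 - e^{-\sigma\Phi},$$

    with $\Phi$ the particle fluence and $\sigma$ the action cross-section; a single cross-section of this form cannot reproduce the full range of observed dose and fluence dependencies, which is why a two-component description is invoked.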

  4. Mathematical Modeling Approaches in Plant Metabolomics.

    Science.gov (United States)

    Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas

    2018-01-01

    The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.

  5. Bayesian analysis of overdispersed chromosome aberration data with the negative binomial model

    International Nuclear Information System (INIS)

    Brame, R.S.; Groer, P.G.

    2002-01-01

    The usual assumption of a Poisson model for the number of chromosome aberrations in controlled calibration experiments implies variance equal to the mean. However, it is known that chromosome aberration data from experiments involving high linear energy transfer radiations can be overdispersed, i.e. the variance is greater than the mean. Present methods for dealing with overdispersed chromosome data rely on frequentist statistical techniques. In this paper, the problem of overdispersion is considered from a Bayesian standpoint. The Bayes Factor is used to compare Poisson and negative binomial models for two previously published calibration data sets describing the induction of dicentric chromosome aberrations by high doses of neutrons. Posterior densities for the model parameters, which characterise dose response and overdispersion are calculated and graphed. Calibrative densities are derived for unknown neutron doses from hypothetical radiation accident data to determine the impact of different model assumptions on dose estimates. The main conclusion is that an initial assumption of a negative binomial model is the conservative approach to chromosome dosimetry for high LET radiations. (author)
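
    The overdispersion at issue can be summarised by the mean-variance relations of the two competing models (one common parameterisation of the negative binomial):

        $$\text{Poisson:}\;\; \operatorname{Var}(Y) = \mu, \qquad \text{negative binomial:}\;\; \operatorname{Var}(Y) = \mu + \frac{\mu^{2}}{k},$$

    where $k > 0$ measures the extra-Poisson variation and the Poisson model is recovered as $k \to \infty$.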

  6. Vector autoregressive model approach for forecasting outflow cash in Central Java

    Science.gov (United States)

    hoyyi, Abdul; Tarno; Maruddani, Di Asih I.; Rahmawati, Rita

    2018-05-01

    Multivariate time series models are increasingly applied to economic and business problems as well as in other fields. One such economic application is forecasting cash outflow. This problem can be viewed globally in the sense that there is no spatial effect between regions, so the model used is the Vector Autoregressive (VAR) model. The data used in this research are money supply data from Bank Indonesia Semarang, Solo, Purwokerto and Tegal. The models considered are VAR(1), VAR(2) and VAR(3). Ordinary Least Squares (OLS) is used to estimate the parameters. The best model is selected using the smallest Akaike Information Criterion (AIC). The analysis shows that the AIC value of the VAR(1) model is 42.72292, of VAR(2) 42.69119, and of VAR(3) 42.87662. The differences in AIC values are not substantial. Based on the smallest-AIC criterion, the best model is the VAR(2) model. This model satisfies the white noise assumption.
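
    A minimal sketch of the order-selection step in Python (statsmodels), using a synthetic two-series VAR process in place of the Bank Indonesia data:

        # Fit VAR(1)-VAR(3) by OLS and compare AIC; synthetic data stand in for the
        # outflow series, and the column names are placeholders.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        n = 120
        e = rng.normal(size=(n, 2))
        y = np.zeros((n, 2))
        for t in range(1, n):
            y[t] = 0.5 * y[t - 1] + e[t]          # simple VAR(1)-type process
        data = pd.DataFrame(y, columns=["region_a", "region_b"])

        model = VAR(data)
        for p in (1, 2, 3):
            res = model.fit(p)
            print(f"VAR({p}) AIC = {res.aic:.5f}")

        best = model.fit(maxlags=3, ic="aic")     # let statsmodels pick the lag by AIC
        print("selected order:", best.k_ar)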

  7. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  8. A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.

    Science.gov (United States)

    Houseman, E Andres; Virji, M Abbas

    2017-08-01

    Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates
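
    The left-censoring device, integrating over the tail below the LOD, can be sketched for a plain lognormal likelihood as follows; this is a simplified frequentist stand-in, not the paper's Bayesian spline model, and all readings are hypothetical.

        # Left-censored lognormal likelihood: readings below the LOD contribute the
        # cumulative probability below log(LOD) instead of a density term.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        lod = 0.05
        obs = np.array([0.12, 0.03, 0.30, 0.02, 0.08, 0.20])   # hypothetical readings
        censored = obs < lod

        def neg_log_lik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll_detect = norm.logpdf(np.log(obs[~censored]), mu, sigma).sum()
            ll_censored = norm.logcdf(np.log(lod), mu, sigma) * censored.sum()
            return -(ll_detect + ll_censored)

        fit = minimize(neg_log_lik, x0=np.array([np.log(0.1), 0.0]))
        mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
        print("geometric mean:", round(float(np.exp(mu_hat)), 4),
              "GSD:", round(float(np.exp(sigma_hat)), 3))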

  9. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    OpenAIRE

    Hazim Adnan Hashim; Rosli Bin Talif; Lina Hameed Ali

    2016-01-01

    The present study addresses effects of traumatic events such as the September 11 attacks on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth. Thus, they ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were very tragic for Americans because they fundamentally changed their understanding of many aspects of life. The attacks led man...

  10. Commentary: Considering Assumptions in Associations Between Music Preferences and Empathy-Related Responding

    Directory of Open Access Journals (Sweden)

    Susan A O'Neill

    2015-09-01

    This commentary considers some of the assumptions underpinning the study by Clark and Giacomantonio (2015). Their exploratory study examined relationships between young people's music preferences and their cognitive and affective empathy-related responses. First, the prescriptive assumption that music preferences can be measured according to how often an individual listens to a particular music genre is considered within axiology or value theory as a multidimensional construct (general, specific, and functional values). This is followed by a consideration of the causal assumption that if we increase young people's empathy through exposure to prosocial song lyrics, this will increase their prosocial behavior. It is suggested that the predictive power of musical preferences on empathy-related responding might benefit from a consideration of the larger pattern of psychological and subjective wellbeing within the context of developmental regulation across ontogeny that involves mutually influential individual-context relations.

  11. Tank waste remediation system retrieval and disposal mission key enabling assumptions

    International Nuclear Information System (INIS)

    Baldwin, J.H.

    1998-01-01

    An overall systems approach has been applied to develop action plans to support the retrieval and immobilization waste disposal mission. The review concluded that the systems and infrastructure required to support the mission are known. Required systems are either in place or plans have been developed to ensure they exist when needed. The review showed that since October 1996 a robust systems engineering approach to establishing integrated Technical Baselines, work breakdown structures, tank farm structures and configurations, and work scope and costs has established itself as part of the culture within TWRS. An analysis of the programmatic, management and technical activities necessary to declare readiness to proceed with execution of the mission demonstrates that the systems, people and hardware will be on line and ready to support the private contractors. The systems approach included defining the retrieval and immobilized waste disposal mission requirements and evaluating the readiness of the TWRS contractor to supply waste feed to the private contractors in June 2002. The Phase 1 feed delivery requirements from the Private Contractor Request for Proposals were reviewed, transfer piping routes were mapped out, existing systems were evaluated, and upgrade requirements were defined. Technical Basis Reviews were completed to define work scope in greater detail, and cost estimates and associated year-by-year financial analyses were completed. TWRS personnel training, qualifications, management systems and procedures were reviewed and shown to be in place and ready to support the Phase 1B mission. Key assumptions and risks that could negatively impact mission success were evaluated, and appropriate mitigating action plans were developed and scheduled.

  12. A comparison and assessment of approaches for modelling flow over in-line tube banks

    International Nuclear Information System (INIS)

    Iacovides, Hector; Launder, Brian; West, Alastair

    2014-01-01

    Highlights: • We present wall-resolved LES and URANS simulations of periodic flow in heated in-line tube banks. • Simulations of flow in a confined in-line tube bank are compared with experimental data. • When the pitch-to-diameter (P/D) ratio becomes less than 1.6, the periodic flow becomes skewed. • The URANS models tested here are unable to mimic the periodic flow at P/D = 1.6. • In confined tube banks, URANS suggests alternating flow deflection in the axial direction. - Abstract: The paper reports experiences from applying alternative strategies for modelling turbulent flow and local heat-transfer coefficients around in-line tube banks. The motivation is the simulation of conditions in the closely packed cross-flow heat exchangers used in advanced gas-cooled nuclear reactors (AGRs). The main objective is the simulation of flow in large-scale tube banks with confining walls. The suitability and accuracy of wall-resolved large-eddy simulation (LES) and Unsteady Reynolds-Averaged Navier–Stokes (URANS) approaches are examined for generic, square, in-line tube banks, where experimental data are limited but available. Within the latter approach, both eddy-viscosity and Reynolds-stress-transport models have been tested. The assumption of flow periodicity in all three directions is investigated by varying the domain size. It is found that the path taken by the fluid through the tube-bank configuration differs according to the treatment of turbulence and whether the flow is treated as two- or three-dimensional. Finally, the important effect of confining walls has been examined by making direct comparison with the experiments on the complete test rig of Aiba et al. (1982).

  13. Study of the behaviour of trace elements in estuaries: experimental approaches and modeling

    International Nuclear Information System (INIS)

    Dange, Catherine

    2002-01-01

    the biogeochemistry of Cd, Co and Cs in the estuarine environment and the knowledge obtained in the field. Experiments performed both in the laboratory and in situ were necessary to check the validity of the model assumptions and to evaluate model parameters that cannot be measured directly, such as the sorption properties of natural particles. Radiotracers (109Cd, 57Co, 134Cs) were used to determine the key physico-chemical processes and environmental variables that control the speciation and fate of Cd, Co and Cs. This approach, based on spiking with various radionuclides, allowed us to evaluate the affinity constants of particles from the four estuaries for the studied metals (global intrinsic complexation and exchange constants), and also the exchangeable particulate fraction, estimated by comparing the measured distribution coefficients of the natural metals with those of their radioactive analogues. Other parameters needed to build the model (specific surface area, concentration of active surface sites, mean intrinsic acid-base constants,...) were independently estimated by various experimental approaches applied in the laboratory to particle samples taken throughout the estuaries (electrochemical measurements, nitrogen adsorption using the BET method,...). The validation results indicate that, in spite of its simplifications, the model reproduces in a satisfactory way the dissolved/particulate distributions measured for Cd, Co and Cs. With a predictive aim, this type of model must be coupled with a hydro-sedimentary transport model. (author)

  14. Strong exploration of a cast iron pipe failure model

    International Nuclear Information System (INIS)

    Moglia, M.; Davis, P.; Burn, S.

    2008-01-01

    A physical probabilistic failure model for buried cast iron pipes is described, based on the fracture mechanics of the pipe failure process. Such a model is useful in the asset management of buried pipelines. The model is applied within a Monte Carlo simulation framework after adding stochasticity to the input variables. Historical failure rates are calculated from a database of 81,595 pipes and their recorded failures, and model parameters are chosen to provide the best fit between historical and predicted failure rates. This provides an estimated corrosion rate distribution, which agrees well with experimental results. The first model design was deliberately simplistic, in order to allow further exploration of the model assumptions; accordingly, the first runs of the initial model gave a poor quantitative and qualitative fit to the failure rates. However, by exploring natural additional assumptions, such as those relating to stochastic loads, a set of assumptions was identified that improved the model to the point where an acceptable fit was achieved. The model bridges the gap between the micro- and macro-levels, which is the novelty of the approach: data can be used both at the macro-level, in terms of failure rates, and at the micro-level, in terms of corrosion rates.

  15. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    Science.gov (United States)

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care, and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as the outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were clinically important and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). The overall network meta-analysis produced results closely comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Linear-No-Threshold Default Assumptions for Noncancer and Nongenotoxic Cancer Risks: A Mathematical and Biological Critique.

    Science.gov (United States)

    Bogen, Kenneth T

    2016-03-01

    To improve U.S. Environmental Protection Agency (EPA) dose-response (DR) assessments for noncarcinogens and for nonlinear mode of action (MOA) carcinogens, the 2009 NRC Science and Decisions Panel recommended that the adjustment-factor approach traditionally applied to these endpoints should be replaced by a new default assumption that both endpoints have linear-no-threshold (LNT) population-wide DR relationships. The panel claimed this new approach is warranted because population DR is LNT when any new dose adds to a background dose that explains background levels of risk, and/or when there is substantial interindividual heterogeneity in susceptibility in the exposed human population. Mathematically, however, the first claim is either false or effectively meaningless and the second claim is false. Any dose- and population-response relationship that is statistically consistent with an LNT relationship may instead be an additive mixture of just two quasi-threshold DR relationships, which jointly exhibit low-dose S-shaped, quasi-threshold nonlinearity just below the lower end of the observed "linear" dose range. In this case, LNT extrapolation would necessarily overestimate increased risk by increasingly large relative magnitudes at diminishing values of above-background dose. The fact that chemically-induced apoptotic cell death occurs by unambiguously nonlinear, quasi-threshold DR mechanisms is apparent from recent data concerning this quintessential toxicity endpoint. The 2009 NRC Science and Decisions Panel claims and recommendations that default LNT assumptions be applied to DR assessment for noncarcinogens and nonlinear MOA carcinogens are therefore not justified either mathematically or biologically. © 2015 The Author. Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  17. Exploring five common assumptions on Attention Deficit Hyperactivity Disorder

    NARCIS (Netherlands)

    Batstra, Laura; Nieweg, Edo H.; Hadders-Algra, Mijna

    The number of children diagnosed with attention deficit hyperactivity disorder (ADHD) and treated with medication is steadily increasing. The aim of this paper was to critically discuss five debatable assumptions on ADHD that may explain these trends to some extent. These are that ADHD (i) causes

  18. A Test of the Optimality Approach to Modelling Canopy Gas Exchange by Natural Vegetation

    Science.gov (United States)

    Schymanski, S. J.; Sivapalan, M.; Roderick, M. L.; Beringer, J.; Hutley, L. B.

    2005-12-01

    Natural vegetation has co-evolved with its environment over a long period of time and natural selection has led to a species composition that is most suited for the given conditions. Part of this adaptation is the vegetation's water use strategy, which determines the amount and timing of water extraction from the soil. Knowing that water extraction by vegetation often accounts for over 90% of the annual water balance in some places, we need to understand its controls if we want to properly model the hydrologic cycle. Water extraction by roots is driven by transpiration from the canopy, which in turn is an inevitable consequence of CO2 uptake for photosynthesis. Photosynthesis provides plants with their main building material, carbohydrates, and with the energy necessary to thrive and prosper in their environment. Therefore we expect that natural vegetation would have evolved an optimal water use strategy to maximise its `net carbon profit' (the difference between carbon acquired by photosynthesis and carbon spent on maintenance of the organs involved in its uptake). Based on this hypothesis and on an ecophysiological gas exchange and photosynthesis model (Cowan and Farquhar 1977; von Caemmerer 2000), we model the optimal vegetation for a site in Howard Springs (N.T., Australia) and compare the modelled fluxes with measurements by Beringer, Hutley et al. (2003). The comparison gives insights into theoretical and real controls on transpiration and photosynthesis and tests the optimality approach to modelling gas exchange of natural vegetation with unknown properties. The main advantage of the optimality approach is that no assumptions about the particular vegetation on a site are needed, which makes it very powerful for predicting vegetation response to long-term climate- or land use change. Literature: Beringer, J., L. B. Hutley, et al. (2003). "Fire impacts on surface heat, moisture and carbon fluxes from a tropical savanna in northern Australia." International

  19. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Science.gov (United States)

    2011-01-01

    Background: The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods: Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results: The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions: Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing processes. A complete

  20. A piecewise modeling approach for climate sensitivity studies: Tests with a shallow-water model

    Science.gov (United States)

    Shao, Aimei; Qiu, Chongjian; Niu, Guo-Yue

    2015-10-01

    In model-based climate sensitivity studies, model errors may grow during continuous long-term integrations in both the "reference" and "perturbed" states and hence the climate sensitivity (defined as the difference between the two states). To reduce the errors, we propose a piecewise modeling approach that splits the continuous long-term simulation into subintervals of sequential short-term simulations, and updates the modeled states through re-initialization at the end of each subinterval. In the re-initialization processes, this approach updates the reference state with analysis data and updates the perturbed states with the sum of analysis data and the difference between the perturbed and the reference states, thereby improving the credibility of the modeled climate sensitivity. We conducted a series of experiments with a shallow-water model to evaluate the advantages of the piecewise approach over the conventional continuous modeling approach. We then investigated the impacts of analysis data error and subinterval length used in the piecewise approach on the simulations of the reference and perturbed states as well as the resulting climate sensitivity. The experiments show that the piecewise approach reduces the errors produced by the conventional continuous modeling approach, more effectively when the analysis data error becomes smaller and the subinterval length is shorter. In addition, we employed a nudging assimilation technique to solve possible spin-up problems caused by re-initializations by using analysis data that contain inconsistent errors between mass and velocity. The nudging technique can effectively diminish the spin-up problem, resulting in a higher modeling skill.
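
    The re-initialization rule described above (the reference state is reset to the analysis, and the perturbed state to the analysis plus the accumulated perturbation) can be sketched generically as follows. The `step_model` function, the analysis sequence, and the subinterval length are placeholders, not the authors' shallow-water code, and the nudging step is omitted.

```python
# Schematic of the piecewise approach: a long run is split into subintervals,
# and both states are re-initialized from analysis data at each boundary.
import numpy as np

def piecewise_run(step_model, analyses, perturbation, subinterval_steps):
    """analyses: list of analysis states (arrays), one per re-initialization time."""
    ref = analyses[0].copy()
    pert = analyses[0] + perturbation
    sensitivities = []
    for k in range(1, len(analyses)):
        for _ in range(subinterval_steps):   # integrate one subinterval
            ref = step_model(ref)
            pert = step_model(pert)
        diff = pert - ref                    # sensitivity accumulated over this piece
        sensitivities.append(diff)
        ref = analyses[k].copy()             # update reference state with analysis data
        pert = analyses[k] + diff            # carry the perturbation forward
    return np.array(sensitivities)
```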

  1. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    Science.gov (United States)

    Kirby, Moira

    2017-01-01

    Introduction: Everyday millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  2. Models for comparing lung-cancer risks in radon- and plutonium-exposed experimental animals

    International Nuclear Information System (INIS)

    Gilbert, E.S.; Cross, F.T.; Sanders, C.L.; Dagle, G.E.

    1990-10-01

    Epidemiologic studies of radon-exposed underground miners have provided the primary basis for estimating human lung-cancer risks resulting from radon exposure. These studies are sometimes used to estimate lung-cancer risks resulting from exposure to other alpha-emitters as well. The latter use, often referred to as the dosimetric approach, is based on the assumption that a specified dose to the lung produces the same lung-tumor risk regardless of the substance producing the dose. At Pacific Northwest Laboratory, experiments have been conducted in which laboratory rodents have been given inhalation exposures to radon and to plutonium (239PuO2). These experiments offer a unique opportunity to compare risks, and thus to investigate the validity of the dosimetric approach. This comparison is made most effectively by modeling the age-specific risk as a function of dose in a way that is comparable to analyses of human data. Such modeling requires assumptions about whether tumors are the cause of death or whether they are found incidental to death from other causes. Results based on the assumption that tumors are fatal indicate that the radon and plutonium dose-response curves differ, with a linear function providing a good description of the radon data, and a pure quadratic function providing a good description of the plutonium data. However, results based on the assumption that tumors are incidental to death indicate that the dose-response curves for the two exposures are very similar, and thus support the dosimetric approach. 14 refs., 2 figs., 6 tabs
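
    The two dose-response shapes contrasted above amount to risk = b·d (linear, radon) versus risk = c·d² (pure quadratic, plutonium). The sketch below fits both forms with SciPy; the dose/risk arrays are placeholders, not the PNL animal data, and the survival-analysis treatment of fatal versus incidental tumors is not reproduced.

```python
# Sketch: fit a linear and a pure-quadratic dose-response curve and compare.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # placeholder doses (Gy)
risk = np.array([0.00, 0.02, 0.04, 0.09, 0.18])   # placeholder lifetime risks

linear = lambda d, b: b * d
quadratic = lambda d, c: c * d ** 2

(b_hat,), _ = curve_fit(linear, dose, risk)
(c_hat,), _ = curve_fit(quadratic, dose, risk)
print(f"linear slope = {b_hat:.3f}, quadratic coefficient = {c_hat:.3f}")
```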

  3. Uniform background assumption produces misleading lung EIT images.

    Science.gov (United States)

    Grychtol, Bartłomiej; Adler, Andy

    2013-06-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes.

  4. Uniform background assumption produces misleading lung EIT images

    International Nuclear Information System (INIS)

    Grychtol, Bartłomiej; Adler, Andy

    2013-01-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes. (paper)

  5. Methodological notes on model comparisons and strategy classification: A falsificationist proposition

    OpenAIRE

    Morten Moshagen; Benjamin E. Hilbig

    2011-01-01

    Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic example...

  6. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    Science.gov (United States)

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.

  7. Nonperturbative approach to the attractive Hubbard model

    International Nuclear Information System (INIS)

    Allen, S.; Tremblay, A.-M. S.

    2001-01-01

    A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle and on a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one).

  8. Does Artificial Neural Network Support Connectivism's Assumptions?

    Science.gov (United States)

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Network (ANN) support their assumptions of knowledge connectivity. Yet, very little has been done to investigate this brave allegation. Does the advancement…

  9. Anti-Atheist Bias in the United States: Testing Two Critical Assumptions

    Directory of Open Access Journals (Sweden)

    Lawton K Swan

    2012-02-01

    Decades of opinion polling and empirical investigations have clearly demonstrated a pervasive anti-atheist prejudice in the United States. However, much of this scholarship relies on two critical and largely unaddressed assumptions: (a) that when people report negative attitudes toward atheists, they do so because they are reacting specifically to their lack of belief in God; and (b) that survey questions asking about attitudes toward atheists as a group yield reliable information about biases against individual atheist targets. To test these assumptions, an online survey asked a probability-based random sample of American adults (N = 618) to evaluate a fellow research participant (“Jordan”). Jordan garnered significantly more negative evaluations when identified as an atheist than when described as religious or when religiosity was not mentioned. This effect did not differ as a function of labeling (“atheist” versus “no belief in God”) or the amount of individuating information provided about Jordan. These data suggest that both assumptions are tenable: nonbelief—rather than extraneous connotations of the word “atheist”—seems to underlie the effect, and participants exhibited a marked bias even when confronted with an otherwise attractive individual.

  10. Sensitivity of the OMI ozone profile retrieval (OMO3PR) to a priori assumptions

    NARCIS (Netherlands)

    Mielonen, T.; De Haan, J.F.; Veefkind, J.P.

    2014-01-01

    We have assessed the sensitivity of the operational OMI ozone profile retrieval (OMO3PR) algorithm to a number of a priori assumptions. We studied the effect of stray light correction, surface albedo assumptions and a priori ozone profiles on the retrieved ozone profile. Then, we studied how to

  11. Process Mining: A Two-Step Approach to Balance Between Underfitting and Overfitting

    DEFF Research Database (Denmark)

    van der Aalst, W.M.P.; Rubin, V.; Verbeek, H.M.W.

    behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such "overfitting" by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are "overfitting" (allow only what has actually been observed) while other parts may be "underfitting" (allow for much more behavior without strong support for it). None of the existing techniques enables the user to control the balance between "overfitting" and "underfitting". To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the "theory of regions", the model
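
    The first step of the two-step approach (building a transition system from the log under a configurable state abstraction) can be sketched generically as below. The set-of-activities abstraction and the toy log are illustrative choices, not the authors' implementation, and the second step (the theory of regions) is not shown.

```python
# Generic sketch: construct a transition system from an event log, where a
# state is some abstraction of the prefix seen so far (here, the set of
# activities observed). Other abstractions (sequence, multiset, last-k events)
# trade off between overfitting and underfitting.
from collections import defaultdict

def build_transition_system(log, abstraction=frozenset):
    """log: iterable of traces, each a sequence of activity names."""
    transitions = defaultdict(set)
    for trace in log:
        seen = []
        state = abstraction(seen)
        for activity in trace:
            seen.append(activity)
            next_state = abstraction(seen)
            transitions[state].add((activity, next_state))
            state = next_state
    return transitions

# Hypothetical toy log with two traces.
example_log = [("register", "check", "pay"), ("register", "pay")]
ts = build_transition_system(example_log)
```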

  12. Modelling flow dynamics in water distribution networks using ...

    African Journals Online (AJOL)

    One such approach is the Artificial Neural Networks (ANNs) technique. The advantage of ANNs is that they are robust and can be used to model complex linear and non-linear systems without making implicit assumptions. ANNs can be trained to forecast flow dynamics in a water distribution network. Such flow dynamics ...
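
    A minimal sketch of this idea, assuming lagged flow values as inputs and scikit-learn's MLPRegressor as a stand-in ANN (the file name, lag depth and network size are placeholders, not taken from the study):

```python
# Sketch: train an ANN to forecast flow from lagged observations.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(series, n_lags):
    """Build (X, y) where each row of X holds n_lags past values of the series."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

flow = np.loadtxt("flow_series.txt")      # hypothetical hourly flow record
X, y = lagged_matrix(flow, n_lags=6)

ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X[:-24], y[:-24])                  # hold out the last day for testing
forecast = ann.predict(X[-24:])
```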

  13. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    This article is devoted to an architectural and historical analysis of the constructions forming the complex of the Assumption Cathedral of the Astrakhan Kremlin, which has not previously been considered as a subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the article traces the creation history of the complete architectural complex, sustained in the single style of the Muscovite baroque and unique in its composite construction, and offers an interpretation of it in the all-Russian architectural context. Typological features of the individual constructions are brought to light. The Prechistinsky bell tower has an atypical architectural solution - a “hexagonal structure on octagonal and quadrangular structures”. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto (“the Place of Execution”) located on an axis from the west; it is connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article also considers the version that the Place of Execution emerged on the basis of an earlier construction - a tower called “the Peal”, which is repeatedly mentioned in written sources in connection with S. Razin’s revolt. The metropolitan Sampson, trying to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct appeal to a capital prototype, to emphasize continuity and close connection with Moscow.

  14. Evaluating tidal marsh sustainability in the face of sea-level rise: a hybrid modeling approach applied to San Francisco Bay.

    Directory of Open Access Journals (Sweden)

    Diana Stralberg

    Tidal marshes will be threatened by increasing rates of sea-level rise (SLR) over the next century. Managers seek guidance on whether existing and restored marshes will be resilient under a range of potential future conditions, and on prioritizing marsh restoration and conservation activities. Building upon established models, we developed a hybrid approach that involves a mechanistic treatment of marsh accretion dynamics and incorporates spatial variation at a scale relevant for conservation and restoration decision-making. We applied this model to San Francisco Bay, using best-available elevation data and estimates of sediment supply and organic matter accumulation developed for 15 Bay subregions. Accretion models were run over 100 years for 70 combinations of starting elevation, mineral sediment, organic matter, and SLR assumptions. Results were applied spatially to evaluate eight Bay-wide climate change scenarios. Model results indicated that under a high rate of SLR (1.65 m/century), short-term restoration of diked subtidal baylands to mid marsh elevations (-0.2 m MHHW) could be achieved over the next century with sediment concentrations greater than 200 mg/L. However, suspended sediment concentrations greater than 300 mg/L would be required for 100-year mid marsh sustainability (i.e., no elevation loss). Organic matter accumulation had minimal impacts on this threshold. Bay-wide projections of marsh habitat area varied substantially, depending primarily on SLR and sediment assumptions. Across all scenarios, however, the model projected a shift in the mix of intertidal habitats, with a loss of high marsh and gains in low marsh and mudflats. Results suggest a bleak prognosis for long-term natural tidal marsh sustainability under a high-SLR scenario. To minimize marsh loss, we recommend conserving adjacent uplands for marsh migration, redistributing dredged sediment to raise elevations, and concentrating restoration efforts in sediment-rich areas

  15. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines including the social, political, natural and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  16. Visual Analysis of Tumor Control Models for Prediction of Radiotherapy Response

    DEFF Research Database (Denmark)

    Raidou, Renata G.; Casares Magaz, Oscar; Muren, Ludvig

    2016-01-01

    impact on the modeling outcome, while the models are sensitive to a number of parameter assumptions. Currently, uncertainty and parameter sensitivity are not incorporated in the analysis, due to time and resource constraints. To this end, we propose a visual tool that enables clinical researchers working on TCP modeling to explore the information provided by their models, to discover new knowledge and to confirm or generate hypotheses within their data. Our approach incorporates the following four main components: (1) It supports the exploration of uncertainty and its effect on TCP models; (2) It facilitates parameter sensitivity analysis to common assumptions; (3) It enables the identification of inter-patient response variability; (4) It allows starting the analysis from the desired treatment outcome, to identify treatment strategies that achieve it. We conducted an evaluation with nine clinical
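
    For context only, one common TCP formulation that such tools typically build on is the Poisson model with linear-quadratic cell survival; the abstract does not state which TCP models the tool implements, so the sketch below is purely illustrative.

```python
# Poisson TCP with linear-quadratic (LQ) cell survival, shown as background
# for the kind of model being explored; parameters are illustrative inputs.
import numpy as np

def tcp_poisson(D, n_fractions, N0, alpha, beta):
    """TCP for total dose D (Gy) delivered in n_fractions equal fractions."""
    d = D / n_fractions                                        # dose per fraction
    surviving = N0 * np.exp(-n_fractions * (alpha * d + beta * d ** 2))
    return np.exp(-surviving)                                  # Poisson TCP
```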

  17. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both the scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.

  18. Multiscale Molecular Dynamics Model for Heterogeneous Charged Systems

    Science.gov (United States)

    Stanton, L. G.; Glosli, J. N.; Murillo, M. S.

    2018-04-01

    Modeling matter across large length scales and timescales using molecular dynamics simulations poses significant challenges. These challenges are typically addressed through the use of precomputed pair potentials that depend on thermodynamic properties like temperature and density; however, many scenarios of interest involve spatiotemporal variations in these properties, and such variations can violate assumptions made in constructing these potentials, thus precluding their use. In particular, when a system is strongly heterogeneous, most of the usual simplifying assumptions (e.g., spherical potentials) do not apply. Here, we present a multiscale approach to orbital-free density functional theory molecular dynamics (OFDFT-MD) simulations that bridges atomic, interionic, and continuum length scales to allow for variations in hydrodynamic quantities in a consistent way. Our multiscale approach enables simulations on the order of micron length scales and tens of picoseconds in time, which exceeds current OFDFT-MD simulations by many orders of magnitude. This new capability is then used to study the heterogeneous, nonequilibrium dynamics of a heated interface characteristic of an inertial-confinement-fusion capsule containing a plastic ablator near a fuel layer composed of deuterium-tritium ice. At these scales, fundamental assumptions of continuum models are explored; features such as the separation of the momentum fields among the species and strong hydrogen jetting from the plastic into the fuel region are observed, which had previously not been seen in hydrodynamic simulations.

  19. Direct numerical simulations of temporally developing hydrocarbon shear flames at elevated pressure: effects of the equation of state and the unity Lewis number assumption

    Science.gov (United States)

    Korucu, Ayse; Miller, Richard

    2016-11-01

    Direct numerical simulations (DNS) of temporally developing shear flames are used to investigate both equation of state (EOS) and unity-Lewis-number (Le) assumption effects in hydrocarbon flames at elevated pressure. A reduced kerosene/air mechanism including a semi-global soot formation/oxidation model is used to study soot formation and oxidation processes in a temporally developing hydrocarbon shear flame operating at both atmospheric and elevated pressures with the cubic Peng-Robinson real-fluid EOS. Results are compared to simulations using the ideal gas law (IGL). The results show that while the unity-Le assumption with the IGL EOS under-predicts the flame temperature at all pressures, with the real-fluid EOS it under-predicts the flame temperature at 1 and 35 atm and over-predicts it at the remaining pressures. The soot mass fraction, Ys, is under-predicted only for the 1 atm flame for both the IGL and real-fluid EOS models. While Ys is over-predicted at elevated pressures with the IGL EOS, with the real-fluid EOS the Ys predictions are similar to results using a non-unity-Le model derived from non-equilibrium thermodynamics and real diffusivities. Adopting the unity-Le assumption is shown to cause misprediction of Ys, the flame temperature, and the mass fractions of CO, H and OH.
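
    For reference, the cubic Peng-Robinson EOS used for the real-fluid cases above can be evaluated for a pure species as follows (mixtures require mixing rules not shown here). The function is a generic textbook implementation, not the authors' DNS code.

```python
# Peng-Robinson equation of state: pressure from temperature and molar volume,
# using the standard critical-property correlations for a pure species.
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Pressure (Pa) at temperature T (K) and molar volume v (m^3/mol)."""
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v ** 2 + 2.0 * b * v - b ** 2)
```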

  20. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
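
    The reference variance-based analysis mentioned above (Saltelli's Monte Carlo estimation of the Sobol indices) can be sketched with the SALib package; the three-parameter problem definition and the model function below are placeholders, not the post-operative flow model.

```python
# Sketch of Saltelli sampling and Sobol index estimation with SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["resistance", "compliance", "inflow"],   # hypothetical parameters
    "bounds": [[0.5, 1.5], [0.8, 1.2], [0.9, 1.1]],
}

def model(x):                       # placeholder scalar output
    return x[0] * x[1] + 0.1 * x[2] ** 2

X = saltelli.sample(problem, 1024)              # N*(2D+2) model evaluations
Y = np.array([model(x) for x in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                       # first-order and total indices
```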