WorldWideScience

Sample records for modeling requires assumptions

  1. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    Science.gov (United States)

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up.
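    To make the index for moderated mediation concrete, the sketch below estimates it in a linear model where the treatment-to-mediator path is moderated, using the common product-of-coefficients form (index = a3 * b). The simulated data, variable names, and effect sizes are hypothetical illustrations, not the authors' data or estimator.

```python
# Hedged sketch: index for moderated mediation in linear models.
# Assumes moderation of the X -> M path; all data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.binomial(1, 0.5, n)          # randomized treatment
W = rng.normal(size=n)               # moderator
M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.normal(size=n)   # mediator
Y = 0.5 * M + 0.1 * X + rng.normal(size=n)                  # outcome

def ols(y, cols):
    """Least-squares coefficients for y on [intercept, cols...]."""
    Z = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a = ols(M, [X, W, X * W])            # mediator model: a[3] is the X*W coefficient
b = ols(Y, [X, M, W])                # outcome model:  b[2] is the M coefficient

index_mod_med = a[3] * b[2]          # index for moderated mediation (a3 * b)
print(f"index for moderated mediation ~ {index_mod_med:.3f}")
```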

  2. Faulty assumptions for repository requirements

    Energy Technology Data Exchange (ETDEWEB)

    Sutcliffe, W G

    1999-06-03

Long term performance requirements for a geologic repository for spent nuclear fuel and high-level waste are based on assumptions concerning water use and subsequent deaths from cancer due to ingesting water contaminated with radioisotopes ten thousand years in the future. This paper argues that the assumptions underlying these requirements are faulty for a number of reasons. First, in light of the inevitable technological progress, including efficient desalination of water, over the next ten thousand years, it is inconceivable that a future society would drill for water near a repository. Second, even today we would not use water without testing its purity. Third, today many types of cancer are curable, and with the rapid progress in medical technology in general, and the prevention and treatment of cancer in particular, it is improbable that cancer caused by ingesting contaminated water will be a significant killer in the far future. This paper reviews the performance requirements for geological repositories and comments on the difficulties in proving compliance in the face of inherent uncertainties. The already tiny long-term risk posed by a geologic repository is presented and contrasted with contemporary everyday risks. A number of examples of technological progress, including cancer treatments, are advanced. The real and significant costs resulting from the overly conservative requirements are then assessed. Examples are given of how money (and political capital) could be put to much better use to save lives today and in the future. It is concluded that although a repository represents essentially no long-term risk, monitored retrievable dry storage (above or below ground) is the current best alternative for spent fuel and high-level nuclear waste.

  3. The Effect of Violations of the Constant Demand Assumption on the Defense Logistic Agency Requirements Model

    Science.gov (United States)

    1994-09-01

Pipeline stock is calculated by forecasting the projected requirements due to transportation time and counting out the amount of stock needed to cover...the "lead time" delay. Time needed for the transportation of an order is referred to as the lead time. "Lead time is the amount of time between the...Defense Logistics Agency The Defense Logistics Agency (DLA) is an agency of the Department of Defense. "The National Security Act (NSA) established

  4. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.
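    To illustrate the monopole assumption criticized above, the sketch below evaluates the electrostatic potential of a molecule represented only by atom-centered point charges; the coordinates, partial charges, and grid point are hypothetical, and real force fields extend this with multipoles and polarizability as the perspective argues.

```python
# Hedged sketch of the monopole (point-charge) approximation:
# the potential at a grid point is the Coulomb sum over atom-centered charges.
import numpy as np

COULOMB_K = 332.0637  # kcal*Angstrom/(mol*e^2), a common molecular-mechanics constant

atoms = np.array([[0.0, 0.0, 0.0],   # hypothetical atom positions (Angstrom)
                  [1.2, 0.0, 0.0]])
charges = np.array([-0.4, 0.4])      # hypothetical partial charges (e)

def monopole_esp(point, positions, q):
    """Electrostatic potential at `point` from fixed point charges."""
    r = np.linalg.norm(positions - point, axis=1)
    return COULOMB_K * np.sum(q / r)

print(monopole_esp(np.array([0.6, 1.0, 0.0]), atoms, charges))
```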

  5. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team-internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explains the scenarios, constraints, or other issues that drive them.

  6. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  7. Catalyst Deactivation: Control Relevance of Model Assumptions

    Directory of Open Access Journals (Sweden)

    Bernt Lie

    2000-10-01

Two principles for describing catalyst deactivation are discussed, one based on the deactivation mechanism, the other based on the activity and catalyst age distribution. When the model is based upon activity decay, it is common to use a mean activity developed from the steady-state residence time distribution. We compare control-relevant properties of such an approach with those of a model based upon the deactivation mechanism. Using a continuous stirred tank reactor as an example, we show that the mechanistic approach and the population balance approach lead to identical models. However, common additional assumptions used for activity-based models lead to model properties that may deviate considerably from the correct one.

  8. Simplified subsurface modelling: data assimilation and violated model assumptions

    Science.gov (United States)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated
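    As a minimal illustration of the data-assimilation step referred to above, the sketch below applies a stochastic ensemble Kalman filter analysis to an ensemble of model states given a few groundwater-level observations; the state layout, observation operator, and error levels are hypothetical rather than taken from the study.

```python
# Hedged sketch of a stochastic Ensemble Kalman Filter (EnKF) analysis step,
# as might be used to correct a simplified groundwater/unsaturated-zone model.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens, n_obs = 50, 40, 5          # hypothetical sizes

X = rng.normal(size=(n_state, n_ens))      # forecast ensemble (one column per member)
H = np.zeros((n_obs, n_state))             # observation operator: observe 5 head values
H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
R = 0.1 * np.eye(n_obs)                    # observation-error covariance
y = rng.normal(size=n_obs)                 # observed groundwater levels (synthetic)

def enkf_update(X, H, R, y, rng):
    """Return the analysis ensemble after assimilating observations y."""
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                      # ensemble anomalies
    P = A @ A.T / (X.shape[1] - 1)                  # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, X.shape[1]).T
    return X + K @ (Y - H @ X)                      # perturbed-observation update

Xa = enkf_update(X, H, R, y, rng)
print(Xa.mean(axis=1)[:5])
```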

  9. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results......, such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing...... waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later, benefit from past modelling assumptions and knowledge and issues. Models developed in different countries furthermore rely...

  10. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    CERN Document Server

    Côté, Benoit; Ritter, Christian; Herwig, Falk; Venn, Kim A

    2016-01-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of Type Ia supernovae and the strength of gal...

  11. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    Science.gov (United States)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy’s star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields. OMEGA
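    As a hedged sketch of the parameter-recovery step described above, the code below runs a simple Metropolis random-walk MCMC over two generic model parameters (standing in for, e.g., the SN Ia rate and outflow strength) against synthetic abundance-like data; the forward model, data, and priors are hypothetical stand-ins, not the authors' OMEGA pipeline.

```python
# Hedged sketch of Metropolis MCMC parameter recovery against abundance-like data.
# The forward model, data, and priors are hypothetical stand-ins for OMEGA.
import numpy as np

rng = np.random.default_rng(2)

def forward_model(theta, x):
    """Toy stand-in for a chemical-evolution prediction of abundance trends."""
    a, b = theta
    return a * np.exp(-b * x)

x_obs = np.linspace(0.0, 2.0, 9)                    # nine "elements" / abscissa points
y_obs = forward_model((1.0, 0.8), x_obs) + rng.normal(0, 0.05, x_obs.size)
sigma = 0.05

def log_posterior(theta):
    if np.any(np.asarray(theta) <= 0):              # flat positive prior (assumed)
        return -np.inf
    resid = y_obs - forward_model(theta, x_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([0.5, 0.5])
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)           # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    chain.append(theta.copy())

print(np.mean(chain[1000:], axis=0))                # posterior means after burn-in
```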

  12. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.;

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed...... by Froese et al. are realistic and consistent. We further show that the assumption about density-dependence being described by a stock recruitment relationship is responsible for determining whether a peak in the cohort biomass of a population occurs late or early in life. Finally, we argue...

  13. Expressing Environment Assumptions and Real-time Requirements for a Distributed Embedded System with Shared Variables

    DEFF Research Database (Denmark)

    Tjell, Simon; Fernandes, João Miguel

    2008-01-01

    In a distributed embedded system, it is often necessary to share variables among its computing nodes to allow the distribution of control algorithms. It is therefore necessary to include a component in each node that provides the service of variable sharing. For that type of component, this paper...... discusses how to create a Colored Petri Nets (CPN) model that formally expresses the following elements in a clearly separated structure: (1) assumptions about the behavior of the environment of the component, (2) real-time requirements for the component, and (3) a possible solution in terms of an algorithm...

  14. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    ’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...

  15. Testing Modeling Assumptions in the West Africa Ebola Outbreak

    Science.gov (United States)

    Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre

    2016-01-01

    The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance. PMID:27721505
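    A minimal sketch of the growth-rate estimate mentioned above: fitting an exponential (log-linear) model to early case counts for a single administrative division. The counts and reporting interval are invented for illustration and are not the study's data.

```python
# Hedged sketch: estimating an initial epidemic growth rate by log-linear regression
# on early case counts of one administrative division (synthetic data).
import numpy as np

days = np.arange(0, 35, 7)                 # weekly report times (days)
cases = np.array([3, 7, 16, 33, 70])       # hypothetical early case counts

slope, intercept = np.polyfit(days, np.log(cases), 1)
growth_rate = slope                        # per-day exponential growth rate r
doubling_time = np.log(2) / growth_rate

print(f"r ~ {growth_rate:.3f} /day, doubling time ~ {doubling_time:.1f} days")
```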

  16. Testing Modeling Assumptions in the West Africa Ebola Outbreak

    Science.gov (United States)

    Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre

    2016-10-01

    The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.

  17. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, Rick [ICF International, Fairfax, VA (United States); Bluestein, Joel [ICF International, Fairfax, VA (United States); Rodriguez, Nick [ICF International, Fairfax, VA (United States); Knoke, Stu [ICF International, Fairfax, VA (United States)

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.

  18. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
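    For reference, the sketch below computes a levelized cost of energy in the standard discounted form commonly used for such comparisons; the capital cost, O&M, fuel cost, generation, and discount rate are hypothetical inputs, not values from either data set.

```python
# Hedged sketch of a levelized cost of energy (LCOE) calculation:
# LCOE = sum_t(C_t / (1+r)^t) / sum_t(E_t / (1+r)^t), with hypothetical inputs.
def lcoe(capital, fixed_om, variable_om, fuel, energy_mwh, rate, years):
    """All per-year costs in $, energy in MWh; capital is spent in year 0."""
    disc_cost = capital
    disc_energy = 0.0
    for t in range(1, years + 1):
        annual_cost = fixed_om + (variable_om + fuel) * energy_mwh
        disc_cost += annual_cost / (1 + rate) ** t
        disc_energy += energy_mwh / (1 + rate) ** t
    return disc_cost / disc_energy            # $/MWh

# Example: 100 MW plant at 45% capacity factor (hypothetical numbers).
energy = 100 * 8760 * 0.45
print(f"LCOE ~ {lcoe(150e6, 4e6, 3.5, 20.0, energy, 0.07, 30):.1f} $/MWh")
```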

  19. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  20. PKreport: report generation for checking population pharmacokinetic model assumptions

    Directory of Open Access Journals (Sweden)

    Li Jun

    2011-05-01

Background: Graphics play an important and unique role in population pharmacokinetic (PopPK) model building by exploring hidden structure among data before modeling, evaluating model fit, and validating results after modeling. Results: The work described in this paper is about a new R package called PKreport, which is able to generate a collection of plots and statistics for testing model assumptions, visualizing data and diagnosing models. The metric system is utilized as the currency for communicating between data sets and the package to generate special-purpose plots. It provides ways to match output from diverse software such as NONMEM, Monolix, the R nlme package, etc. The package is implemented with an S4 class hierarchy, and offers an efficient way to access the output from NONMEM 7. The final reports take advantage of the web browser as user interface to manage and visualize plots. Conclusions: PKreport provides (1) a flexible and efficient R class to store and retrieve NONMEM 7 output; (2) automated plots for users to visualize data and models; (3) automatically generated R scripts that are used to create the plots; (4) an archive-oriented management tool for users to store, retrieve and modify figures; (5) high-quality graphs based on the R packages lattice and ggplot2. The general architecture, running environment and statistical methods can be readily extended with the R class hierarchy. PKreport is free to download at http://cran.r-project.org/web/packages/PKreport/index.html.

  1. Validating modelling assumptions of alpha particles in electrostatic turbulence

    CERN Document Server

    Wilkie, George; Highcock, Edmund; Dorland, William

    2014-01-01

To rigorously model fast ions in fusion plasmas, a non-Maxwellian equilibrium distribution must be used. In this work, the response of high-energy alpha particles to electrostatic turbulence has been analyzed for several different tokamak parameters. Our results are consistent with known scalings and experimental evidence that alpha particles are generally well-confined: on the order of several seconds. It is also confirmed that the effect of alphas on the turbulence is negligible at realistically low concentrations, consistent with linear theory. It is demonstrated that the usual practice of using a high-temperature Maxwellian gives incorrect estimates for the radial alpha particle flux, and a method of correcting it is provided. Furthermore, we see that the timescales associated with collisions and transport compete at moderate energies, calling into question the assumption, used in the derivation of the slowing-down distribution, that alpha particles remain confined to a flux surface.
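    For context, the slowing-down distribution whose derivation the abstract questions is commonly written in the standard textbook form below; this is stated here for reference under that assumption, not quoted from the paper.

```latex
% Classical isotropic slowing-down distribution for alphas born at speed v_\alpha,
% with source rate S, slowing-down time \tau_s and critical speed v_c (standard form).
f_s(v) = \frac{S\,\tau_s}{4\pi\left(v^{3} + v_c^{3}\right)}, \qquad v \le v_\alpha,
\qquad f_s(v) = 0 \quad \text{for } v > v_\alpha .
```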

  2. Testing the habituation assumption underlying models of parasitoid foraging behavior

    Science.gov (United States)

    Abram, Katrina; Colazza, Stefano; Peri, Ezio

    2017-01-01

Background: Habituation, a form of non-associative learning, has several well-defined characteristics that apply to a wide range of physiological and behavioral responses in many organisms. In classic patch time allocation models, habituation is considered to be a major mechanistic component of parasitoid behavioral strategies. However, parasitoid behavioral responses to host cues have not previously been tested for the known, specific characteristics of habituation. Methods: In the laboratory, we tested whether the foraging behavior of the egg parasitoid Trissolcus basalis shows specific characteristics of habituation in response to consecutive encounters with patches of host (Nezara viridula) chemical contact cues (footprints), in particular: (i) a training interval-dependent decline in response intensity, and (ii) a training interval-dependent recovery of the response. Results: As would be expected of a habituated response, wasps trained at higher frequencies decreased their behavioral response to host footprints more quickly and to a greater degree than those trained at low frequencies, and subsequently showed a more rapid, although partial, recovery of their behavioral response to host footprints. This putative habituation learning could not be blocked by cold anesthesia, ingestion of an ATPase inhibitor, or ingestion of a protein synthesis inhibitor. Discussion: Our study provides support for the assumption that diminishing responses of parasitoids to chemical indicators of host presence constitutes habituation as opposed to sensory fatigue, and provides a preliminary basis for exploring the underlying mechanisms. PMID:28321365

  3. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior

    Science.gov (United States)

    Tran, Van; McCall, Matthew N.; McMurray, Helene R.; Almudevar, Anthony

    2013-01-01

    Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles. PMID:24376454
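    To make the threshold Boolean network (TBN) formulation concrete, the sketch below implements a synchronous update with linear threshold functions and a simple self-degradation rule on ties; the weights, thresholds, and degradation flags are hypothetical and do not represent the yeast cell-cycle network.

```python
# Hedged sketch of a threshold Boolean network (TBN) synchronous update:
# gene i turns on when the weighted sum of its regulators exceeds its threshold.
import numpy as np

W = np.array([[ 0, 1, -1],        # hypothetical regulatory weights (rows = targets)
              [ 1, 0,  0],
              [-1, 1,  0]])
theta = np.array([0, 0, 0])       # per-gene activation thresholds
decay = np.array([1, 0, 1])       # 1 = gene self-degrades when its net input is at threshold

def tbn_step(x, W, theta, decay):
    """One synchronous TBN update of the Boolean state vector x."""
    s = W @ x
    nxt = np.where(s > theta, 1, np.where(s < theta, 0, x))  # keep state on ties
    return np.where((s == theta) & (decay == 1), 0, nxt)     # self-degradation on ties

x = np.array([1, 0, 1])
for _ in range(5):
    x = tbn_step(x, W, theta, decay)
    print(x)
```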

  4. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior

    Directory of Open Access Journals (Sweden)

    Van eTran

    2013-12-01

Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks. We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles.

  5. Differentiating Different Modeling Assumptions in Simulations of MagLIF loads on the Z Generator

    Science.gov (United States)

    Jennings, C. A.; Gomez, M. R.; Harding, E. C.; Knapp, P. F.; Ampleford, D. J.; Hansen, S. B.; Weis, M. R.; Glinsky, M. E.; Peterson, K.; Chittenden, J. P.

    2016-10-01

    Metal liners imploded by a fast rising (MagLIF experiments have had some success. While experiments are increasingly well diagnosed, many of the measurements (particularly during stagnation) are time integrated, limited in spatial resolution or require additional assumptions to interpret in the context of a structured, rapidly evolving system. As such, in validating MHD calculations, there is the potential for the same observables in the experimental data to be reproduced under different modeling assumptions. Using synthetic diagnostics of the results of different pre-heat, implosion and stagnation simulations run with the Gorgon MHD code, we discuss how the interpretation of typical Z diagnostics relate to more fundamental simulation parameters. We then explore the extent to which different assumptions on instability development, current delivery, high-Z mix into the fuel and initial laser deposition can be differentiated in our existing measurements. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.

  6. Modelling the dynamics of reasoning processes: reasoning by assumption

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.

    2008-01-01

    To model the dynamics of cognitive processes, often the Dynamical Systems Theory (DST) is advocated. However, for higher cognitive processes such as reasoning and certain forms of natural language processing the techniques adopted within DST are not very adequate. This paper shows how an analysis of

  7. Testing the habituation assumption underlying models of parasitoid foraging behavior

    NARCIS (Netherlands)

    Abram, Paul K.; Cusumano, Antonino; Abram, Katrina; Colazza, Stefano; Peri, Ezio

    2017-01-01

    Background. Habituation, a form of non-associative learning, has several well-defined characteristics that apply to a wide range of physiological and behavioral responses in many organisms. In classic patch time allocation models, habituation is considered to be a major mechanistic component of para

  8. Modeling assumptions influence on stress and strain state in 450 t cranes hoisting winch construction

    Directory of Open Access Journals (Sweden)

    Damian GĄSKA

    2011-01-01

This work investigates the FEM simulation of the stress and strain state of the selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The model of the trolley was built from several parts cooperating with each other (in contact). The influence of model assumptions (simplifications) in selected construction nodes on the value of the maximum stress and strain and its area of occurrence was analyzed. The aim of this study was to determine whether the simplifications, which reduce the time required to prepare the model and perform calculations (e.g., a rigid connection instead of contact), substantially change the characteristics of the model.

  9. A computational model to investigate assumptions in the headturn preference procedure

    Directory of Open Access Journals (Sweden)

    Christina eBergmann

    2013-10-01

In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.

  10. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

Purpose of review: To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings: We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary: Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600

  11. Gaussian versus top-hat profile assumptions in integral plume models

    Science.gov (United States)

    Davidson, G. A.

Numerous integral models describing the behaviour of buoyant plumes released into stratified crossflows have been presented in the literature. One of the differences between these models is the form assumed for the self-similar profile: some models assume a top-hat form while others assume a Gaussian. The differences between these two approaches are evaluated by (a) comparing the governing equations on which Gaussian and top-hat models are based; (b) comparing some typical plume predictions generated by each type of model over a range of model parameters. It is shown that, while the profile assumption does lead to differences in the equations which govern plume variables, the effects of these differences on actual plume predictions are small over the range of parameters of practical interest. Since the predictions of Gaussian and top-hat models are essentially equivalent, it can thus be concluded that the additional physical information incorporated into a Gaussian formulation plays only a minor role in mean plume behaviour, and that the top-hat approach, which requires the numerical solution of a simpler set of equations, is adequate for most situations where an integral approach would be used.
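    The equivalence underlying the comparison above can be stated compactly: a Gaussian profile and a top-hat profile carry the same volume and momentum fluxes when the top-hat radius and velocity are chosen as below. These are standard integral-model relations restated here for reference, not quoted from the paper.

```latex
% Gaussian profile w(r) = w_c \exp(-r^2/b^2) versus an equivalent top-hat (W, R):
% matching the volume flux \pi b^2 w_c and momentum flux \tfrac{1}{2}\pi b^2 w_c^2 gives
W = \tfrac{1}{2}\, w_c , \qquad R = \sqrt{2}\, b ,
\qquad \int_0^\infty w_c\, e^{-r^2/b^2}\, 2\pi r\, \mathrm{d}r = \pi b^2 w_c = \pi R^2 W .
```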

  12. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    Energy Technology Data Exchange (ETDEWEB)

    Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Townsend, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bloom, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-01

Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour ahead commitment step is included before the dispatch step and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0

  13. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Science.gov (United States)

    Fang, L.; Sun, X. Y.; Liu, Y. W.

    2016-12-01

    In order to shed light on understanding the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology.

  14. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    Science.gov (United States)

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  15. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...

  16. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid

  17. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid mediu

  18. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    Science.gov (United States)

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  19. Requirements engineering for cross-organizational ERP implementation undocumented assumptions and potential mismatches

    NARCIS (Netherlands)

    Daneva, Maya; Wieringa, Roel

    2005-01-01

    A key issue in requirements engineering (RE) for enterprise resource planning (ERP) in a cross-organizational context is how to find a match between the ERP application modules and requirements for business coordination. This paper proposes a conceptual framework for analyzing coordination requireme

  20. Requirements engineering for cross-organizational ERP implementation: Undocumented assumptions and potential mismatches

    NARCIS (Netherlands)

    Daneva, Maia; Wieringa, Roelf J.

    A key issue in Requirements Engineering (RE) for Enterprise Resource Planning (ERP) in a crossorganizational context is how to find a match between the ERP application modules and requirements for business coordination. This paper proposes a conceptual framework for analyzing coordination

  1. Quantum Darwinism Requires an Extra-Theoretical Assumption of Encoding Redundancy

    Science.gov (United States)

    Fields, Chris

    2010-10-01

Observers restricted to the observation of pointer states of apparatus cannot conclusively demonstrate that the pointer of an apparatus A registers the state of a system of interest S without perturbing S. Observers cannot, therefore, conclusively demonstrate that the states of a system S are redundantly encoded by pointer states of multiple independent apparatus without destroying the redundancy of encoding. The redundancy of encoding required by quantum Darwinism must, therefore, be assumed from outside the quantum-mechanical formalism and without the possibility of experimental demonstration.

  2. Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions

    Science.gov (United States)

    Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.

    2015-01-01

    Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.

  3. Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

    Science.gov (United States)

    Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.

    2017-01-01

    Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
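    As an example of one such test, the sketch below applies Levene's test (median-centered, i.e., the Brown-Forsythe variant) to three simulated groups before fitting a one-factor ANOVA; the data and group sizes are hypothetical and unrelated to the simulation study described above.

```python
# Hedged sketch: checking the homogeneity of variance assumption across ANOVA groups
# with Levene's test (median-centered) before a one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1 = rng.normal(0.0, 1.0, 30)     # three hypothetical groups with
g2 = rng.normal(0.2, 1.0, 30)     # equal variances by construction,
g3 = rng.normal(0.5, 1.5, 30)     # except this one

stat, p = stats.levene(g1, g2, g3, center="median")
print(f"Levene W = {stat:.2f}, p = {p:.3f}")

f_stat, f_p = stats.f_oneway(g1, g2, g3)      # one-factor ANOVA for comparison
print(f"ANOVA F = {f_stat:.2f}, p = {f_p:.3f}")
```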

  4. Single family heating and cooling requirements: Assumptions, methods, and summary results

    Energy Technology Data Exchange (ETDEWEB)

    Ritschard, R.L.; Hanford, J.W.; Sezgen, A.O. (Lawrence Berkeley Lab., CA (United States))

    1992-03-01

The research has created a database of hourly building loads using a state-of-the-art building simulation code (DOE-2.1D) for 8 prototypes, representing pre-1940s to 1990s building practices, in 16 US climates. The report describes the assumed modeling inputs and building operations, defines the building prototypes and selection of base cities, compares the simulation results to both surveyed and measured data sources, and discusses the results. The full database with hourly space conditioning, water heating, and non-HVAC electricity consumption is available from GRI. In addition, the estimated loads on a per-square-foot basis are included, as well as the peak heating and cooling loads.

  5. IRT models with relaxed assumptions in eRm: A manual-like instruction

    Directory of Open Access Journals (Sweden)

    REINHOLD HATZINGER

    2009-03-01

Linear logistic models with relaxed assumptions (LLRA) as introduced by Fischer (1974) are a flexible tool for the measurement of change for dichotomous or polytomous responses. As opposed to the Rasch model, assumptions on dimensionality of items, their mutual dependencies and the distribution of the latent trait in the population of subjects are relaxed. Conditional maximum likelihood estimation allows for inference about treatment, covariate or trend effect parameters without taking the subjects' latent trait values into account. In this paper we will show how LLRAs based on the LLTM, LRSM and LPCM can be used to answer various questions about the measurement of change and how they can be fitted in R using the eRm package. A number of small didactic examples is provided that can easily be used as templates for real data sets. All datafiles used in this paper are available from http://eRm.R-Forge.R-project.org/

  6. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions.

    Science.gov (United States)

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP1 (ASM1 & 3) and WWTP2 (ASM2d). The second set of models includes a reactive settler, which extends the description of the non-reactive TSS sedimentation and transport in the reference case with the full set of ASM processes. Finally, the third set of models is based on including electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the S(NOx) concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases X(OHO) and X(ANO) decay; and, finally, (5) increases the growth of X(PAO) and formation of X(PHA,Stor) for ASM2d, which has a major impact on the whole P removal system. Introduction of electron acceptor dependent decay leads to a substantial increase of the concentration of X(ANO), X(OHO) and X(PAO) in the bottom of the clarifier. The paper ends with a critical discussion of the influence of the different model assumptions, and emphasizes the need for a model user to understand the significant differences in simulation results that are obtained when applying different combinations of 'standard' models.

  7. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    Science.gov (United States)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, a comprehensive investigation of the sensitivity of FSI simulations in patient-specific IAs is performed using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we stepwise remove these simplifications until the most comprehensive FSI simulations. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations for IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  8. Condition for Energy Efficient Watermarking with Random Vector Model without WSS Assumption

    CERN Document Server

    Yan, Bin; Guo, Yinjing

    2009-01-01

    Energy efficient watermarking preserves the watermark energy after a linear attack as much as possible. In this letter we consider non-stationary signal models and derive conditions for energy efficient watermarking under a random vector model without the WSS assumption. We find that the covariance matrix of the energy efficient watermark should be proportional to the host covariance matrix in order to best resist optimal linear removal attacks. For WSS processes our result reduces to the well-known power spectrum condition. An intuitive geometric interpretation of the results is also discussed, which in turn provides a simpler proof of the main results.
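
    The proportionality condition lends itself to a quick numerical illustration. The sketch below (illustrative dimensions and energy ratio; not code from the letter) draws a watermark whose covariance matrix is a scaled copy of an assumed host covariance via a Cholesky factor.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical host covariance; the size n and the energy ratio gamma are assumptions.
    n = 8
    A = rng.standard_normal((n, n))
    cov_host = A @ A.T / n                    # positive definite host covariance

    gamma = 1e-2                              # watermark-to-host energy ratio (assumed)
    cov_wm = gamma * cov_host                 # energy-efficient condition: proportional covariances

    # Draw one watermark realisation with that covariance via a Cholesky factor.
    L = np.linalg.cholesky(cov_wm)
    w = L @ rng.standard_normal(n)

    print("trace ratio (should equal gamma):", np.trace(cov_wm) / np.trace(cov_host))
    ```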

  9. Meso-scale modeling: beyond local equilibrium assumption for multiphase flow

    CERN Document Server

    Wang, Wei

    2015-01-01

    This is a summary of the article with the same title, accepted for publication in Advances in Chemical Engineering, 47: 193-277 (2015). Gas-solid fluidization is a typical nonlinear nonequilibrium system with multiscale structure. In particular, the mesoscale structure in terms of bubbles or clusters, which can be characterized by nonequilibrium features such as a bimodal velocity distribution, energy non-equipartition, and correlated density fluctuations, is the critical factor. The traditional two-fluid model (TFM) and its closures depend on local equilibrium and homogeneous distribution assumptions, and fail to predict the dynamic, nonequilibrium phenomena in circulating fluidized beds even with fine-grid resolution. In contrast, mesoscale modeling, as exemplified by the energy-minimization multiscale (EMMS) model, is consistent with the nonequilibrium features of multiphase flows. Thus, the structure-dependent multi-fluid model conservation equations with the EMMS-based mesoscale modeling greatly i...

  10. A rigid thorax assumption affects model loading predictions at the upper but not lower lumbar levels.

    Science.gov (United States)

    Ignasiak, Dominika; Ferguson, Stephen J; Arjmand, Navid

    2016-09-06

    A number of musculoskeletal models of the human spine have been used for predictions of lumbar and muscle forces. However, the predictive power of these models might be limited by a commonly made assumption: the thoracic region is represented as a single lumped rigid body. This study hence aims to investigate the impact of such an assumption on the predictions of spinal and muscle forces. A validated thoracolumbar spine model was used with a flexible thorax (T1-T12), a completely rigid one, or a rigid thorax with posture updated at each analysis step. Simulations of isometric forward flexion up to 80°, with and without a 20 kg hand load, were performed, based on previously measured kinematics. Depending on the simulated task, the rigid model predicted slightly or moderately lower compressive loading than the flexible one. The differences were relatively greater at the upper lumbar levels (average underestimation of 14% at T12-L1 for flexion tasks and of 18% for flexion tasks with hand load) than at the lower levels (3% and 8% at L5-S1 for unloaded and loaded tasks, respectively). The rigid model with updated thoracic posture predicted compressive forces similar to those of the rigid model. Predicted muscle forces were, however, very different between the three models. This study indicates that lumbar spine models with a rigid thorax definition can be used for loading investigations at the lowermost spinal levels. For predictions of upper lumbar spine loading, using models with an articulated thorax is advised. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Evaluation of Horizontal Electric Field Under Different Lightning Current Models by Perfect Ground Assumption

    Institute of Scientific and Technical Information of China (English)

    LIANG Jianfeng; LI Yanming

    2012-01-01

    Lightning electromagnetics can affect the reliability of power and communication systems. Therefore, evaluation of the electromagnetic fields generated by the lightning return stroke is indispensable. Arnold Sommerfeld proposed a model to calculate the electromagnetic field, but it involves the time-consuming Sommerfeld integral. A perfectly conducting ground assumption, however, allows fast calculation. This paper therefore reviews the perfect-ground equations for evaluating lightning electromagnetic fields, presents three engineering lightning return stroke models, and calculates the horizontal electric field produced by each of them. According to the results, the amplitude of the lightning return stroke has a strong impact on the horizontal electric field, and its steepness also influences the horizontal electric field. Moreover, the perfect-ground method is faster than the Sommerfeld integral method.
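
    As a concrete illustration of the three engineering return stroke models referred to above, the sketch below evaluates the spatial-temporal channel current i(z, t) for the TL, MTLL and MTLE models with a Heidler channel-base current; all parameter values (peak current, time constants, return stroke speed, channel height, decay constant) are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np

    def heidler(t, I0=30e3, tau1=1.8e-6, tau2=95e-6, n=2):
        """Heidler channel-base current; parameter values are illustrative assumptions."""
        eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
        x = (t / tau1) ** n
        return np.where(t > 0, (I0 / eta) * x / (1 + x) * np.exp(-t / tau2), 0.0)

    def return_stroke_current(z, t, model="TL", v=1.5e8, H=7.5e3, lam=2e3):
        """Spatial-temporal current i(z, t) for the TL, MTLL and MTLE engineering models."""
        tz = t - z / v                      # retarded time at height z
        base = heidler(tz)
        if model == "TL":
            atten = 1.0                     # no attenuation with height
        elif model == "MTLL":
            atten = 1.0 - z / H             # linear current decay with height
        elif model == "MTLE":
            atten = np.exp(-z / lam)        # exponential current decay with height
        return np.where(tz > 0, atten * base, 0.0)

    t = np.linspace(0, 50e-6, 1000)
    for m in ("TL", "MTLL", "MTLE"):
        print(m, "peak current at z = 1 km:", return_stroke_current(1e3, t, model=m).max(), "A")
    ```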

  12. Continuous-discrete model of parasite-host system dynamics: Trigger regime at simplest assumptions

    Directory of Open Access Journals (Sweden)

    L. V. Nedorezov

    2014-09-01

    Full Text Available In this paper a continuous-discrete model of parasite-host system dynamics is analyzed. Within the framework of the model it is assumed that the appearance of new generations of both populations occurs at fixed time moments tk=hk, t0=0, k=1,2,..., h=const>0; this means that several processes are compressed together: the production of eggs by hosts, the attack of eggs by parasites (with the respective transformation of host eggs into parasite eggs), the time hosts and parasites spend in the egg phase, and the appearance of new individuals. It is also assumed that the death process of individuals is continuous in nature, and that both populations develop independently between the fixed time moments. Dynamic regimes of the model are analyzed. In particular, it is shown that, under the simplest assumptions about the birth process in the host population and the number of attacked hosts, a regime with two non-trivial stable attractors in the phase space of the system can be realized.
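
    A minimal numerical sketch of such a continuous-discrete (semi-discrete) scheme is given below: continuous, independent mortality between the fixed moments tk, followed by a discrete reproduction and parasitism event at each tk. The specific birth and attack functions (exponential mortality, a Nicholson-Bailey-type escape probability) and all parameter values are assumptions for illustration, not the functions used in the paper.

    ```python
    import numpy as np

    # All parameter values below are illustrative assumptions.
    h = 1.0                    # time between generation events
    mu_x, mu_y = 0.3, 0.6      # continuous death rates of hosts (x) and parasites (y)
    r, a = 5.0, 0.8            # host fecundity and parasite searching efficiency

    def step(x, y):
        # Continuous, independent mortality between the fixed moments t_k = h*k.
        x_s = x * np.exp(-mu_x * h)
        y_s = y * np.exp(-mu_y * h)
        # Discrete event at t_k: egg production and Nicholson-Bailey-type attack.
        escaped = np.exp(-a * y_s)           # fraction of host eggs escaping parasitism
        x_next = r * x_s * escaped
        y_next = r * x_s * (1.0 - escaped)   # attacked host eggs become parasite eggs
        return x_next, y_next

    x, y = 1.0, 0.5
    traj = []
    for _ in range(200):
        x, y = step(x, y)
        traj.append((x, y))
    print(traj[-5:])   # late-time behaviour; different initial states may settle on different attractors
    ```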

  13. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Owing to its advantages in handling nonlinearities and couplings, the AUV model investigated here is, for the first time, constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environmental and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) problem, which means that the identification procedure operates under a general noise assumption. In order to make the algorithm recursive, a propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.

  14. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data.

    Science.gov (United States)

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M

    Modern statistical methods for incomplete data have been increasingly applied to a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used for evaluating diagnostic tests or biomarkers in medical research, has become an increasingly popular topic in both its development and its application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or of the assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
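
    The sketch below illustrates the basic impute-then-ROC workflow on simulated biomarker data with a missing-at-random mechanism; the data-generating model, the simple within-group normal imputation model and the number of imputations are all assumptions for illustration, not the techniques evaluated in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated biomarker: diseased cases shifted upward (assumed data-generating model).
    n = 400
    d = rng.integers(0, 2, n)                      # disease status
    x = rng.normal(loc=0.8 * d, scale=1.0)         # biomarker value

    # Impose missingness that depends on disease status only (MAR given d).
    miss = rng.random(n) < np.where(d == 1, 0.3, 0.1)
    x_obs = np.where(miss, np.nan, x)

    def auc(marker, label):
        """Mann-Whitney estimate of the area under the ROC curve."""
        pos, neg = marker[label == 1], marker[label == 0]
        diff = pos[:, None] - neg[None, :]
        return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

    # Simplified parametric multiple imputation: draw from the within-group normal model M times.
    M, aucs = 20, []
    for _ in range(M):
        xi = x_obs.copy()
        for g in (0, 1):
            obs = (~np.isnan(x_obs)) & (d == g)
            mu, sd = x_obs[obs].mean(), x_obs[obs].std(ddof=1)
            fill = np.isnan(x_obs) & (d == g)
            xi[fill] = rng.normal(mu, sd, fill.sum())   # proper draws, not mean imputation
        aucs.append(auc(xi, d))

    cc = ~np.isnan(x_obs)
    print("complete-case AUC:", auc(x_obs[cc], d[cc]))
    print("MI-pooled AUC    :", np.mean(aucs))
    ```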

  15. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validating software requirements in agile development processes by simulation and, in particular, automated testing. The use of models as a central development artifact needs to be added to the portfolio of software engineering techniques to further increase the efficiency and flexibility of development, beginning early in the requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore, testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  16. NONLINEAR MODELS FOR DESCRIPTION OF CACAO FRUIT GROWTH WITH ASSUMPTION VIOLATIONS

    Directory of Open Access Journals (Sweden)

    JOEL AUGUSTO MUNIZ

    2017-01-01

    Full Text Available Cacao (Theobroma cacao L.) is an important fruit in the Brazilian economy, mainly cultivated in the south of the State of Bahia. The optimal stage for harvesting is a major factor in fruit quality, and knowledge of its growth curves can help, especially in identifying the ideal maturation stage for harvesting. Nonlinear regression models have been widely used for the description of growth curves. However, several studies on this subject do not consider residual analysis, the possible dependence between longitudinal observations, or sample variance heterogeneity, compromising the quality of the modeling. The objective of this work was to compare the fit of nonlinear regression models, considering residual analysis and assumption violations, in the description of cacao (clone Sial-105) fruit growth. The data evaluated were extracted from Brito and Silva (1983), who conducted the experiment in the Cacao Research Center, Ilheus, State of Bahia. The variables fruit length, diameter and volume as a function of fruit age were studied. The use of weighting and the incorporation of residual dependence proved effective, since the modeling became more consistent, improving the model fit. Considering a first-order autoregressive structure, when needed, led to a significant reduction in the residual standard deviation, making the estimates more reliable. The Logistic model was the most efficient for the description of the cacao fruit growth.
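
    A minimal sketch of the weighted (heteroscedastic) logistic fit described above is given below; the fruit-length values, the assumed residual standard deviations and the parameterisation are illustrative only and are not the data of Brito and Silva (1983). The first-order autoregressive residual structure is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical fruit-length data over fruit age (days); values are illustrative only.
    age = np.array([15, 30, 45, 60, 75, 90, 105, 120, 135, 150], dtype=float)
    length = np.array([2.1, 4.0, 7.2, 11.0, 14.5, 17.0, 18.6, 19.5, 19.9, 20.1])
    sd = 0.05 * length + 0.2          # assumed heterogeneous residual standard deviations

    def logistic(t, alpha, beta, gamma):
        """Logistic growth curve: asymptote alpha, location beta, growth rate gamma."""
        return alpha / (1.0 + np.exp(beta - gamma * t))

    # Weighted (heteroscedastic) fit; sigma propagates the unequal variances.
    p0 = (20.0, 3.0, 0.05)
    popt, pcov = curve_fit(logistic, age, length, p0=p0, sigma=sd, absolute_sigma=True)
    print("alpha, beta, gamma:", popt)
    print("approx. standard errors:", np.sqrt(np.diag(pcov)))
    ```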

  17. Relaxing the closure assumption in single-season occupancy models: staggered arrival and departure times

    Science.gov (United States)

    Kendall, William L.; Hines, James E.; Nichols, James D.; Grant, Evan H. Campbell

    2013-01-01

    Occupancy statistical models that account for imperfect detection have proved very useful in several areas of ecology, including species distribution and spatial dynamics, disease ecology, and ecological responses to climate change. These models are based on the collection of multiple samples at each of a number of sites within a given season, during which it is assumed the species is either absent or present and available for detection while each sample is taken. However, for some species, individuals are only present or available for detection seasonally. We present a statistical model that relaxes the closure assumption within a season by permitting staggered entry and exit times for the species of interest at each site. Based on simulation, our open model eliminates bias in occupancy estimators and in some cases increases precision. The power to detect the violation of closure is high if detection probability is reasonably high. In addition to providing more robust estimation of occupancy, this model permits comparison of phenology across sites, species, or years, by modeling variation in arrival or departure probabilities. In a comparison of four species of amphibians in Maryland we found that two toad species arrived at breeding sites later in the season than a salamander and frog species, and departed from sites earlier.

  18. Finding the right fit: A comparison of process assumptions underlying popular drift-diffusion models.

    Science.gov (United States)

    Ashby, Nathaniel J S; Jekel, Marc; Dickert, Stephan; Glöckner, Andreas

    2016-12-01

    Recent research makes increasing use of eye-tracking methodologies to generate and test process models. Overall, such research suggests that attention, generally indexed by fixations (gaze duration), plays a critical role in the construction of preference, although the methods used to support this supposition differ substantially. In two studies we empirically test prototypical versions of prominent processing assumptions against one another and against several base models. We find that general evidence accumulation processes provide a good fit to the data. An accumulation process that assumes leakage and temporal variability in evidence weighting (i.e., a primacy effect) fits the aggregate data well, both in terms of choices and decision times, and does so across varying types of choices (e.g., charitable giving and hedonic consumption) and numbers of options. However, when comparing models at the level of the individual, simpler models capture the choice data better for a majority of participants. The theoretical and practical implications of these findings are discussed.
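
    For concreteness, the sketch below simulates one prototypical version of such an accumulation process, with leakage and a primacy-weighted evidence stream racing to a fixed threshold; the functional form and every parameter value are assumptions for illustration, not the fitted models from the studies.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def leaky_accumulation(values, leak=0.05, primacy=0.01, noise=0.2,
                           threshold=2.0, dt=1.0, max_steps=2000):
        """Race between leaky accumulators fed by a primacy-weighted evidence stream.

        `values` holds the momentary evidence favouring each option; the weight of new
        evidence decays over time (primacy). All parameter values are assumptions.
        """
        acc = np.zeros(len(values))
        for step in range(max_steps):
            w = np.exp(-primacy * step)                      # temporal decay of evidence weight
            noise_t = rng.normal(0.0, noise, len(values))
            acc += dt * (w * values - leak * acc) + noise_t  # leaky integration of evidence
            if acc.max() >= threshold:
                return int(acc.argmax()), step               # choice and decision time
        return int(acc.argmax()), max_steps

    runs = [leaky_accumulation(np.array([0.15, 0.12])) for _ in range(500)]
    p_first = np.mean([c == 0 for c, _ in runs])
    mean_rt = np.mean([t for _, t in runs])
    print(f"P(choose option 1) = {p_first:.2f}, mean decision time = {mean_rt:.0f} steps")
    ```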

  19. Evaluating assumptions and parameterization underlying process-based ecosystem models: the case of LPJ-GUESS

    Science.gov (United States)

    Pappas, C.; Fatichi, S.; Leuzinger, S.; Burlando, P.

    2012-04-01

    Dynamic vegetation models have been widely used for analyzing ecosystem dynamics and climate feedbacks. Their performance has been tested extensively against observations and in model intercomparison studies. In the present study, the LPJ-GUESS state-of-the-art ecosystem model was evaluated with respect to its structure, hypotheses, and parameterization by performing a global sensitivity analysis (GSA). The study aims at examining potential model limitations, particularly with regard to regional and watershed scale applications. A detailed GSA based on variance decomposition is presented to investigate the structural assumptions of the model and to highlight processes and parameters that cause the highest variability in the outputs. First-order and total sensitivity indices were calculated for each of the parameters using Sobol's methodology. In order to elucidate the role of climate in model sensitivity, synthetic climate scenarios were generated based on climatic data from Switzerland. The results clearly indicate a very high sensitivity of LPJ-GUESS to photosynthetic parameters. Intrinsic quantum efficiency alone is able to explain about 60% of the variability in vegetation carbon fluxes and pools for most of the investigated climate conditions. Processes related to light were also found to be important, together with parameters affecting plant structure (growth, establishment and mortality). The model shows minor sensitivity to hydrological and soil texture parameters, questioning its skill in representing spatial vegetation heterogeneity at regional or watershed scales. We conclude that the structure of LPJ-GUESS, and possibly that of other, structurally similar dynamic vegetation models, may need to be reconsidered. Specifically, the oversensitivity of the photosynthetic component deserves particular attention, as it seems to contradict an increasing number of observations suggesting that photosynthesis may be a consequence rather than the driver of plant growth.
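
    The variance-based indices referred to above can be estimated with a standard pick-freeze (Saltelli/Jansen) scheme. The sketch below applies it to a toy stand-in function rather than to LPJ-GUESS; the function, parameter ranges and sample size are assumptions chosen only to show how first-order and total-effect indices are computed.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def model(x):
        """Toy stand-in for an ecosystem model output (e.g., a carbon flux); not LPJ-GUESS."""
        q_eff, light, mort = x[:, 0], x[:, 1], x[:, 2]
        return q_eff ** 2 * (1.0 + 0.3 * light) - 0.1 * mort

    d, N = 3, 4096
    A = rng.random((N, d))                     # parameters scaled to assumed [0, 1] ranges
    B = rng.random((N, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))

    S1, ST = np.zeros(d), np.zeros(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # replace column i of A with column i of B
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli et al. 2010)
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index (Jansen 1999)

    print("first-order indices :", np.round(S1, 2))
    print("total-effect indices:", np.round(ST, 2))
    ```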

  20. The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern (eds.)

    2006-10-15

    This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present the prerequisites, methods and data used in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context, and on the methods and sub-models that form the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for the safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, for which there is a good understanding of the present conditions and whose development over time is fairly predictable. The topography affects surface hydrology, sedimentation, the size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary: collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis.

  1. The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern (eds.)

    2006-10-15

    This report summarises the method adopted for the safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central parameter is topography, for which there is a good understanding of the present conditions and whose development over time is fairly predictable. The topography affects surface hydrology, sedimentation, the size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which is also a fairly constant parameter over time. The Landscape Dose Factor (LDF) approach gives an integrated measure for the site and also resolves the issues relating to the size of the group with the highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained.

  2. Estimating ETAS: the effects of truncation, missing data, and model assumptions

    Science.gov (United States)

    Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy; Werner, Maximilian; Wiemer, Stefan

    2016-04-01

    The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion of the limits of, and influences on, its estimation. What has been established is that ETAS parameter estimates are influenced by missing data (e.g., earthquakes are not reliably detected during lively aftershock sequences) and by simplifying assumptions (e.g., that aftershocks are isotropically distributed). In this article, we investigate the effect of truncation: how do parameter estimates depend on the cut-off magnitude, Mcut, above which parameters are estimated? We analyze catalogs from southern California and Italy and find that parameter variations as a function of Mcut are caused by (i) changing sample size (which affects, e.g., Omori's c constant) or (ii) an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation - the finite catalog length - that can bias estimators of the branching ratio. Being also a function of Omori's p-value, the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. Accounting for missing aftershocks, the ETAS productivity parameters (α and K0) and the Omori c-value are significantly changed only for low Mcut = 2.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of six.
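
    To make the parameters discussed above concrete, the sketch below evaluates the temporal ETAS conditional intensity for a hypothetical mini-catalog; the parameter values (mu, K0, alpha, c, p) and the catalog itself are illustrative assumptions, not estimates for southern California or Italy.

    ```python
    import numpy as np

    def etas_intensity(t, cat_t, cat_m, mu=0.2, K0=0.02, alpha=1.8, c=0.01, p=1.1, m_cut=2.5):
        """Conditional intensity of the temporal ETAS model at time t (days).

        lambda(t) = mu + sum over past events of K0 * exp(alpha*(m_i - m_cut)) / (t - t_i + c)^p.
        Parameter values are illustrative assumptions, not estimates from any catalog.
        """
        past = cat_t < t
        trig = K0 * np.exp(alpha * (cat_m[past] - m_cut)) / (t - cat_t[past] + c) ** p
        return mu + trig.sum()

    # Hypothetical mini-catalog: occurrence times (days) and magnitudes above m_cut.
    cat_t = np.array([0.0, 0.4, 0.5, 3.0, 10.0])
    cat_m = np.array([5.5, 3.0, 2.8, 4.2, 3.1])

    for t in (0.6, 1.0, 5.0, 20.0):
        print(f"lambda({t:5.1f}) = {etas_intensity(t, cat_t, cat_m):.3f} events/day")
    ```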

  3. TESTING THE ASSUMPTIONS AND INTERPRETING THE RESULTS OF THE RASCH MODEL USING LOG-LINEAR PROCEDURES IN SPSS

    NARCIS (Netherlands)

    TENVERGERT, E; GILLESPIE, M; KINGMA, J

    This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total score.

  4. TESTING THE ASSUMPTIONS AND INTERPRETING THE RESULTS OF THE RASCH MODEL USING LOG-LINEAR PROCEDURES IN SPSS

    NARCIS (Netherlands)

    TENVERGERT, E; GILLESPIE, M; KINGMA, J

    1993-01-01

    This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total score

  5. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2012-07-01

    Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. The satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is

  6. Sensitivity of wetland methane emissions to model assumptions: application and model testing against site observations

    Directory of Open Access Journals (Sweden)

    L. Meng

    2011-06-01

    Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. The satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1, and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. We expect this latter number may be an underestimate due to the low high-latitude inundated area captured by satellites and the unrealistically low high-latitude productivity and soil carbon predicted by CLM4. Sensitivity analysis showed a large range (150–346 Tg CH4 yr−1) in

  7. Fundamental Physics and Model Assumptions in Turbulent Combustion Models for Aerospace Propulsion

    Science.gov (United States)

    2014-06-01

    Astronautics also speculate that, for non-equilibrium flows, this effect could be even stronger. Combustion problems wherein the energy deposition often... flamelet regime. However, in the presence of slow reactions such as pyrolysis and/or at high Reynolds numbers that lead to smaller turbulent scales...

  8. Influence of road network and population demand assumptions in evacuation modeling for distant tsunamis

    Science.gov (United States)

    Henry, Kevin; Wood, Nathan J.; Frazier, Tim G.

    2017-01-01

    Tsunami evacuation planning in coastal communities is typically focused on local events where at-risk individuals must move on foot in a matter of minutes to safety. Less attention has been placed on distant tsunamis, where evacuations unfold over several hours, are often dominated by vehicle use and are managed by public safety officials. Traditional traffic simulation models focus on estimating clearance times but often overlook the influence of varying population demand, alternative modes, background traffic, shadow evacuation, and traffic management alternatives. These factors are especially important for island communities with limited egress options to safety. We use the coastal community of Balboa Island, California (USA), as a case study to explore the range of potential clearance times prior to wave arrival for a distant tsunami scenario. We use a first-in–first-out queuing simulation environment to estimate variations in clearance times, given varying assumptions of the evacuating population (demand) and the road network over which they evacuate (supply). Results suggest clearance times are less than wave arrival times for a distant tsunami, except when we assume maximum vehicle usage for residents, employees, and tourists for a weekend scenario. A two-lane bridge to the mainland was the primary traffic bottleneck, thereby minimizing the effect of departure times, shadow evacuations, background traffic, boat-based evacuations, and traffic light timing on overall community clearance time. Reducing vehicular demand generally reduced clearance time, whereas improvements to road capacity had mixed results. Finally, failure to recognize non-residential employee and tourist populations in the vehicle demand substantially underestimated clearance time.
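
    The clearance-time logic described above can be illustrated with a minimal first-in-first-out queue at a single bottleneck; the vehicle demand, departure window and bridge capacity in the sketch are assumed round numbers, not the Balboa Island inputs.

    ```python
    import numpy as np

    # Minimal first-in-first-out clearance-time sketch for a single bottleneck (the bridge).
    # Demand, departure spread and capacity are assumptions for illustration only.
    vehicles = 3000                      # evacuating vehicles (residents + employees + tourists)
    capacity = 1200.0 / 60.0             # bridge service rate, vehicles per minute (two lanes)
    departure_window = 120.0             # minutes over which vehicles enter the queue

    rng = np.random.default_rng(4)
    arrivals = np.sort(rng.uniform(0.0, departure_window, vehicles))  # queue-entry times

    service_free = 0.0                   # time at which the bottleneck next becomes free
    clearance = 0.0
    for a in arrivals:
        start = max(a, service_free)     # FIFO: wait until the bottleneck is free
        service_free = start + 1.0 / capacity
        clearance = service_free

    print(f"last vehicle crosses the bridge after {clearance:.0f} minutes")
    ```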

  9. Matrix Diffusion for Performance Assessment - Experimental Evidence, Modelling Assumptions and Open Issues

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, A

    2004-07-01

    In this report a comprehensive overview of the matrix diffusion of solutes in fractured crystalline rocks is presented. Some examples from observations in crystalline bedrock are used to illustrate that matrix diffusion indeed acts on various length scales. Fickian diffusion is discussed in detail, followed by some considerations on rock porosity. Because the dual-porosity medium model is a very common and versatile method for describing solute transport in fractured porous media, the transport equations and the fundamental assumptions, approximations and simplifications are discussed in detail. There is a variety of geometrical aspects, processes and events which could influence matrix diffusion. The most important of these, e.g. the effect of the flow-wetted fracture surface, channelling and the limited extent of the porous rock available for matrix diffusion, are addressed. In a further section open issues and unresolved problems related to matrix diffusion are mentioned. Since matrix diffusion is one of the key retarding processes in geosphere transport of dissolved radionuclide species, matrix diffusion was consequently taken into account in past performance assessments of radioactive waste repositories in crystalline host rocks. Some issues regarding matrix diffusion are site-specific while others are independent of the specific situation of a planned repository for radioactive wastes. Eight different performance assessments from Finland, Sweden and Switzerland were considered with the aim of finding out how matrix diffusion was addressed, and whether a consistent picture emerges regarding the varying methodology of the different radioactive waste organisations. In the final section of the report some conclusions are drawn and an outlook is given. An extensive bibliography provides the reader with the key papers and reports related to matrix diffusion. (author)
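
    As a concrete illustration of Fickian matrix diffusion, the sketch below evaluates the classical semi-infinite solution c/c0 = erfc(x / (2*sqrt(Da*t))) for the concentration penetrating the rock matrix from a fracture held at constant concentration; the pore diffusivity, retardation factor and contact time are assumed values for illustration only.

    ```python
    import numpy as np
    from scipy.special import erfc

    def matrix_profile(x, t, D_p=1e-11, R=5.0, c0=1.0):
        """Relative concentration in the rock matrix at depth x (m) after contact time t (s).

        One-dimensional Fickian diffusion into a semi-infinite matrix with a constant
        fracture concentration c0; D_p is the pore diffusion coefficient and R the
        matrix retardation factor. All values are illustrative assumptions.
        """
        D_a = D_p / R                                   # apparent diffusivity
        return c0 * erfc(x / (2.0 * np.sqrt(D_a * t)))

    t = 30 * 365.25 * 24 * 3600                         # 30 years of contact time
    for x_cm in (0.1, 1.0, 5.0, 10.0):
        c = matrix_profile(x_cm / 100.0, t)
        print(f"depth {x_cm:5.1f} cm: c/c0 = {c:.3f}")
    ```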

  10. Academic Achievement and Behavioral Health among Asian American and African American Adolescents: Testing the Model Minority and Inferior Minority Assumptions

    Science.gov (United States)

    Whaley, Arthur L.; Noel, La Tonya

    2013-01-01

    The present study tested the model minority and inferior minority assumptions by examining the relationship between academic performance and measures of behavioral health in a subsample of 3,008 (22%) participants in a nationally representative, multicultural sample of 13,601 students in the 2001 Youth Risk Behavioral Survey, comparing Asian…

  11. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Directory of Open Access Journals (Sweden)

    Matt N. Williams

    2013-09-01

    Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.
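
    The corrected point about normality is easy to demonstrate numerically: the sketch below fits an ordinary least squares model with strongly non-normal (log-normal) predictors but normal errors, and applies a normality test to a predictor and to the residuals. The simulated data and the use of the Shapiro-Wilk test are illustrative choices, not part of the original articles.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Skewed (log-normal) predictors with normal errors: the OLS normality assumption
    # concerns the errors, not the predictors or the response.
    n = 500
    x1, x2 = rng.lognormal(size=n), rng.lognormal(size=n)
    eps = rng.normal(0.0, 1.0, n)
    y = 1.0 + 2.0 * x1 - 1.5 * x2 + eps

    X = np.column_stack([np.ones(n), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares
    resid = y - X @ beta

    _, p_pred = stats.shapiro(x1)        # typically tiny: the predictor is clearly non-normal
    _, p_res = stats.shapiro(resid)      # typically large: the errors are normal
    print("coefficients:", np.round(beta, 2))
    print("Shapiro-Wilk p, predictor x1:", p_pred)
    print("Shapiro-Wilk p, residuals   :", p_res)
    ```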

  12. From requirements to Java in a snap model-driven requirements engineering in practice

    CERN Document Server

    Smialek, Michal

    2015-01-01

    This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and co

  13. Light weight design of highly loaded components requires matching load assumptions; Leichtbau hoch beanspruchter Bauteile erfordert angepasste rechnerische Bauteilbemessung

    Energy Technology Data Exchange (ETDEWEB)

    Heinrietz, A.; Ehl, O. [Fraunhofer-Institut fuer Betriebsfestigkeit (LBF), Darmstadt (Germany); Hasselberg, P. [Volvo Truck Corporation (Sweden); Hamm, C.E. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany). Bereich Plankton Biomechanik und Marine Bionik

    2007-07-01

    The aim of safe lightweight design requires increasing effort in computational component dimensioning during the design process. A significant change of a component's topology made to achieve lightweight design may lead to unsafe components when customer use is taken into account: the lightweight shape may become more sensitive to varying operational loads or to variations in adjacent components. Computational methods can support the path to lightweight design in an efficient way. (orig.)

  14. Microwave Properties of Ice-Phase Hydrometeors for Radar and Radiometers: Sensitivity to Model Assumptions

    Science.gov (United States)

    Johnson, Benjamin T.; Petty, Grant W.; Skofronick-Jackson, Gail

    2012-01-01

    A simplified framework is presented for assessing the qualitative sensitivities of computed microwave properties, satellite brightness temperatures, and radar reflectivities to assumptions concerning the physical properties of ice-phase hydrometeors. Properties considered included the shape parameter of a gamma size distribution and the melted-equivalent mass median diameter D0, the particle density, the dielectric mixing formula, and the choice of complex index of refraction for ice. We examine these properties at selected radiometer frequencies of 18.7, 36.5, 89.0, and 150.0 GHz, and radar frequencies of 2.8, 13.4, 35.6, and 94.0 GHz, consistent with existing and planned remote sensing instruments. Passive and active microwave observables of ice particles are found to be extremely sensitive to the melted-equivalent mass median diameter D0 of the size distribution. Similar large sensitivities are found for variations in the ice volume fraction whenever the geometric mass median diameter exceeds approximately 1/8th of the wavelength. At 94 GHz the two-way path-integrated attenuation is potentially large for dense compact particles. The distribution parameter mu has a relatively weak effect on any observable: less than 1-2 K in brightness temperature and up to 2.7 dB difference in the effective radar reflectivity. Reversal of the roles of ice and air in the Maxwell-Garnett dielectric mixing formula leads to a significant change in both microwave brightness temperature (10 K) and radar reflectivity (2 dB). The choice of Warren (1984) or Warren and Brandt (2008) for the complex index of refraction of ice can produce a 3%-4% change in the brightness temperature depression.
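
    The sensitivity to the dielectric mixing assumption mentioned above can be reproduced with the Maxwell-Garnett formula itself. The sketch below computes the effective permittivity of an ice-air mixture twice, once treating ice as the inclusion in an air matrix and once with the roles reversed; the permittivity of solid ice and the ice volume fraction are assumed illustrative values.

    ```python
    import numpy as np

    def maxwell_garnett(eps_incl, eps_matrix, f_incl):
        """Maxwell-Garnett effective permittivity for spherical inclusions in a matrix."""
        num = eps_incl + 2 * eps_matrix + 2 * f_incl * (eps_incl - eps_matrix)
        den = eps_incl + 2 * eps_matrix - f_incl * (eps_incl - eps_matrix)
        return eps_matrix * num / den

    # Illustrative values: solid-ice permittivity at microwave frequencies and a
    # low-density aggregate with 20% ice volume fraction (assumed numbers).
    eps_ice, eps_air, f_ice = 3.15 + 0.003j, 1.0 + 0j, 0.2

    eps_ice_in_air = maxwell_garnett(eps_ice, eps_air, f_ice)        # ice inclusions in an air matrix
    eps_air_in_ice = maxwell_garnett(eps_air, eps_ice, 1.0 - f_ice)  # air inclusions in an ice matrix

    print("ice-in-air:", np.round(eps_ice_in_air, 3))
    print("air-in-ice:", np.round(eps_air_in_ice, 3))   # the role reversal discussed in the abstract
    ```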

  15. A Memory-Based Model of Posttraumatic Stress Disorder: Evaluating Basic Assumptions Underlying the PTSD Diagnosis

    Science.gov (United States)

    Rubin, David C.; Berntsen, Dorthe; Bohni, Malene Klindt

    2008-01-01

    In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., text rev.; American Psychiatric Association,…

  17. A note on the translation of conceptual data models into description logics: disjointness and covering assumptions

    CSIR Research Space (South Africa)

    Casini, G

    2012-10-01

    Full Text Available possibilities for conceptual data modeling. It also raises the question of how existing conceptual models using ER, UML or ORM could be translated into Description Logics (DLs), a family of logics that have proved to be particularly appropriate for formalizing...

  18. Classic models of population dynamics: assumptions about selfregulative mechanisms and numbers of interactions between individuals

    OpenAIRE

    L. V. Nedorezov

    2014-01-01

    Stochastic model of migrations of individuals within the limits of a finite domain on a plane is considered. It is assumed that the population size scale is homogeneous, and that there does not exist an interval of optimal values of population size (the Allee effect is not realized for the population). For every fixed value of population size the number of interactions between individuals is calculated (as an average in space and time). Correspondence between several classic models and numbers of interactions between i...

  19. Classic models of population dynamics: assumptions about selfregulative mechanisms and numbers of interactions between individuals

    Directory of Open Access Journals (Sweden)

    L.V. Nedorezov

    2014-09-01

    Full Text Available Stochastic model of migrations of individuals within the limits of a finite domain on a plane is considered. It is assumed that the population size scale is homogeneous, and that there does not exist an interval of optimal values of population size (the Allee effect is not realized for the population). For every fixed value of population size the number of interactions between individuals is calculated (as an average in space and time). Correspondence between several classic models and numbers of interactions between individuals is analyzed.

  20. Estimation of trunk mechanical properties using system identification: effects of experimental setup and modelling assumptions.

    Science.gov (United States)

    Bazrgari, Babak; Nussbaum, Maury A; Madigan, Michael L

    2012-01-01

    The use of system identification to quantify trunk mechanical properties is growing in biomechanics research. The effects of several experimental and modelling factors involved in the system identification of trunk mechanical properties were investigated. Trunk kinematics and kinetics were measured in six individuals exposed to sudden trunk perturbations. The effects of motion sensor positioning and of the properties of elements between the perturbing device and the trunk were investigated by adopting different models for system identification. Results showed that, by measuring trunk kinematics at a location other than the trunk surface, the deformation of soft tissues is erroneously included in the trunk kinematics, and this results in the trunk being predicted as a more damped structure. Results also showed that including elements between the trunk and the perturbing device in the system identification model did not substantially alter model predictions. Other important parameters found to substantially affect predictions were the cut-off frequency used when low-pass filtering raw data and the data window length used to estimate trunk properties.

  1. Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.

    Science.gov (United States)

    1984-03-01

    Martinengo (1980) extends a model by Dornbusch (1976) in which market equilibrium is formalized in terms of interest rates, level of prices, public...

  2. Probabilistic choice models in health-state valuation research : background, theories, assumptions and applications

    NARCIS (Netherlands)

    Arons, Alexander M M; Krabbe, Paul F M

    2013-01-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the ar

  3. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications

    NARCIS (Netherlands)

    Arons, A.M.M.; Krabbe, P.F.M.

    2013-01-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the ar

  4. Oceanographic and behavioural assumptions in models of the fate of coral and coral reef fish larvae.

    Science.gov (United States)

    Wolanski, Eric; Kingsford, Michael J

    2014-09-06

    A predictive model of the fate of coral reef fish larvae in a reef system is proposed that combines the oceanographic processes of advection and turbulent diffusion with the biological process of horizontal swimming controlled by olfactory and auditory cues within the timescales of larval development. In the model, auditory cues resulted in swimming towards the reefs when within hearing distance of the reef, whereas olfactory cues resulted in the larvae swimming towards the natal reef in open waters by swimming against the concentration gradients in the smell plume emanating from the natal reef. The model suggested that the self-seeding rate may be quite large, at least 20% for the larvae of rapidly developing reef fish species, which contrasted with a self-seeding rate less than 2% for non-swimming coral larvae. The predicted self-recruitment rate of reefs was sensitive to a number of parameters, such as the time at which the fish larvae reach post-flexion, the pelagic larval duration of the larvae, the horizontal turbulent diffusion coefficient in reefal waters and the horizontal swimming behaviour of the fish larvae in response to auditory and olfactory cues, for which better field data are needed. Thus, the model suggested that high self-seeding rates for reef fish are possible, even in areas where the 'sticky water' effect is minimal and in the absence of long-term trapping in oceanic fronts and/or large-scale oceanic eddies or filaments that are often argued to facilitate the return of the larvae after long periods of drifting at sea. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Assessing Model Assumptions for Turbulent Premixed Combustion at High Karlovitz Number

    Science.gov (United States)

    2015-09-03

    flames in the high-Karlovitz regime are characterized and modeled using Direct Numerical Simulations (DNS) with detailed chemistry. To enable the present...

  6. Adhesion Detection Analysis by Modeling Rail Wheel Set Dynamics under the Assumption of Constant Creep Coefficient

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali Soomro

    2014-12-01

    Full Text Available Adhesion level control is necessary to avoid slippage between the rail wheelset and the track, and hence derailment, for the smooth running of a rail vehicle. In this paper the dynamics of the wheelset, with velocities acting in three dimensions of the wheelset and rail track, are discussed, and the creep forces on each wheel in the longitudinal, lateral and spin directions are enumerated and computed for suitable modeling. The results are simulated with Matlab code to observe the correlation between creepage and creep forces for detecting the adhesion level. Adhesion is identified by applying Coulomb's law for sliding friction, comparing the tangential and normal forces through the coefficient of friction.
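
    A minimal sketch of the comparison described above, under the constant creep coefficient assumption, is given below: linear creep forces are computed from the creepages and checked against the Coulomb limit mu*N. The creep coefficient, friction coefficient and normal load are assumed values, and the sketch is written in Python rather than the Matlab code used by the authors.

    ```python
    import numpy as np

    # Assumed parameter values for illustration only.
    f_creep = 8.0e6          # constant creep coefficient (N per unit creepage)
    mu = 0.3                 # coefficient of sliding friction
    N = 1.0e5                # wheel-rail normal load (N)

    def creep_forces(creep_x, creep_y):
        """Linear creep forces under the constant creep coefficient assumption."""
        return f_creep * creep_x, f_creep * creep_y

    def adhesion_state(creep_x, creep_y):
        Fx, Fy = creep_forces(creep_x, creep_y)
        F_t = np.hypot(Fx, Fy)                    # resultant tangential creep force
        limit = mu * N                            # Coulomb saturation limit
        return ("adhesion" if F_t <= limit else "slip"), min(F_t, limit)

    for cx in (0.0005, 0.002, 0.01):
        state, F = adhesion_state(cx, 0.0002)
        print(f"longitudinal creepage {cx:.4f}: {state}, tangential force {F / 1e3:.1f} kN")
    ```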

  7. The stochastic quasi-steady-state assumption: Reducing the model but not the noise

    Science.gov (United States)

    Srivastava, Rishi; Haseltine, Eric L.; Mastny, Ethan; Rawlings, James B.

    2011-04-01

    Highly reactive species at small copy numbers play an important role in many biological reaction networks. We have described previously how these species can be removed from reaction networks using stochastic quasi-steady-state singular perturbation analysis (sQSPA). In this paper we apply sQSPA to three published biological models: the pap operon regulation, a biochemical oscillator, and an intracellular viral infection. These examples demonstrate three different potential benefits of sQSPA. First, rare state probabilities can be accurately estimated from simulation. Second, the method typically results in fewer and better scaled parameters that can be more readily estimated from experiments. Finally, the simulation time can be significantly reduced without sacrificing the accuracy of the solution.
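
    The flavour of such a reduction can be conveyed with the textbook fast-slow network E + S <-> ES -> E + P: a full Gillespie simulation is compared with a reduced simulation in which the slow conversion fires with a Michaelis-Menten propensity. The network, rate constants and copy numbers are illustrative assumptions and are not the three published models analysed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Fast-slow network E + S <-> ES -> E + P; rate constants and copy numbers are assumptions.
    k1, km1, k2 = 1.0, 50.0, 1.0          # binding, unbinding (fast), conversion (slow)

    def ssa_full(E0=10, S0=50, t_end=10.0):
        """Exact Gillespie simulation of the full three-reaction network."""
        E, S, ES, P, t = E0, S0, 0, 0, 0.0
        while True:
            a = np.array([k1 * E * S, km1 * ES, k2 * ES])   # reaction propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            tau = rng.exponential(1.0 / a0)
            if t + tau > t_end:
                break
            t += tau
            r = rng.choice(3, p=a / a0)
            if r == 0:
                E, S, ES = E - 1, S - 1, ES + 1             # binding
            elif r == 1:
                E, S, ES = E + 1, S + 1, ES - 1             # unbinding
            else:
                E, ES, P = E + 1, ES - 1, P + 1             # conversion
        return P

    def ssa_reduced(E0=10, S0=50, t_end=10.0):
        """Reduced model: one slow reaction S -> P with a Michaelis-Menten propensity."""
        Km = (km1 + k2) / k1
        S, P, t = S0, 0, 0.0
        while S > 0:
            tau = rng.exponential(1.0 / (k2 * E0 * S / (Km + S)))
            if t + tau > t_end:
                break
            t += tau
            S, P = S - 1, P + 1
        return P

    full = [ssa_full() for _ in range(20)]
    reduced = [ssa_reduced() for _ in range(20)]
    print("mean product count, full model   :", np.mean(full))
    print("mean product count, reduced model:", np.mean(reduced))
    ```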

  8. Ethnic identity, identity coherence, and psychological functioning: testing basic assumptions of the developmental model.

    Science.gov (United States)

    Syed, Moin; Juang, Linda P

    2014-04-01

    The purpose of the present study was to test three fundamental theoretical propositions from Phinney's (1990) developmental model about the relations among ethnic identity, identity coherence, and psychological functioning: (a) ethnic identity is more strongly related to identity coherence for ethnic minorities than for Whites; (b) ethnic identity is more strongly related to psychological functioning for ethnic minorities than for Whites; and (c) identity coherence mediates the association between ethnic identity and psychological functioning for ethnic minorities, but not for Whites. These hypotheses were tested in three independent samples of ethnically diverse youth. In general, we found weak to moderate support for these three hypotheses, suggesting that the theoretically proposed differences in ethnic identity between ethnic minorities and Whites may not be supported by data. Implications for theory and measurement of ethnic identity are discussed.

  9. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    Science.gov (United States)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.

  10. Estimating ETAS: The effects of truncation, missing data, and model assumptions

    Science.gov (United States)

    Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy Douglas; Werner, Maximilian Jonas; Wiemer, Stefan

    2017-01-01

    The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion dedicated to the limits of, and influences on, its estimation. Among the possible influences we emphasize in this article the effect of the cutoff magnitude, Mcut, above which parameters are estimated; the finite length of earthquake catalogs; and missing data (e.g., during lively aftershock sequences). We analyze catalogs from Southern California and Italy and find that some parameters vary as a function of Mcut due to changing sample size (which affects, e.g., Omori's c constant) or an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation—the finite catalog length—that can bias estimators of the branching ratio. Being also a function of Omori's p value, the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. Finite sample size affects the variation of the branching ratio estimates. Moreover, we investigate the effect of missing aftershocks and find that the ETAS productivity parameters (α and K0) and the Omori's c and p values are significantly changed for Mcut < 3.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of 8.

  11. Flawed Assumptions, Models and Decision Making: Misconceptions Concerning Human Elements in Complex System

    Energy Technology Data Exchange (ETDEWEB)

    FORSYTHE,JAMES C.; WENNER,CAREN A.

    1999-11-03

    The history of high consequence accidents is rich with events wherein the actions, or inaction, of humans was critical to the sequence of events preceding the accident. Moreover, it has been reported that human error may contribute to 80% of accidents, if not more (Dougherty and Fragola, 1988). Within the safety community, this reality is widely recognized and there is a substantially greater awareness of the human contribution to system safety today than has ever existed in the past. Despite these facts, and some measurable reduction in accident rates, when accidents do occur, there is a common lament: no matter how hard we try, we continue to have accidents. Accompanying this lament, there is often bewilderment expressed in statements such as, "There's no explanation for why he/she did what they did". It is believed that these statements are a symptom of inadequacies in how we think about humans and their role within technological systems. In particular, while there has never been a greater awareness of human factors, conceptual models of human involvement in engineered systems are often incomplete and in some cases inaccurate.

  12. Estimates of Late Pleistocene Runoff in Estancia Drainage Basin, Central New Mexico: Climate Assumptions vs. Model Results

    Science.gov (United States)

    Menking, K. M.; Anderson, R. Y.; Syed, K. H.; Shafike, N. G.

    2002-12-01

    The climatic conditions leading to highstands of "pluvial" Lake Estancia in central New Mexico have been a matter of considerable debate, resulting in a wide range of estimates for Pleistocene precipitation and temperature in the southwestern United States. Using a simple hydrologic balance approach, Leopold (1951) calculated that precipitation was 50% greater than modern based on the assumption that summer temperatures were 9 °C colder while winter temperatures were unchanged. In contrast, Galloway (1970) called on temperature decreases of 10-11 °C throughout the year and a reduction in mean annual precipitation of 14% to raise Lake Estancia to its highstand. In still another study, Brakenridge suggested that highstands could be achieved through no change in precipitation if monthly temperatures were reduced by 7-8 °C. Experiments with 3 physically-based, continuous-time models to simulate surface runoff (USDA Soil and Water Assessment Tool), groundwater flow (MODFLOW with LAK2 package), and lake evaporation (lake energy balance model of Hostetler and Bartlein, 1990) indicate that none of these proposed full glacial climate scenarios could have produced a highstand lake. In particular, previous workers appear to have overestimated the reduction in evaporation rates associated with their proposed temperature changes, suggesting that using empirical relationships between modern air temperature and evaporation to predict late Pleistocene evaporation is problematic. Furthermore, model-determined reductions in lake evaporation are insufficient to allow for lake expansion as suggested by Galloway and Brakenridge. Even under Leopold's assumption that precipitation increased by 50%, modeled runoff appears to be insufficient to raise Lake Estancia more than a few meters above the lake floor.

  13. Assumption Centred Modelling of Ecosystem Responses to CO2 at Six US Atmospheric CO2 Enrichment Experiments.

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Luus, K. A.; Ryan, E.; Xia, J.; Norby, R. J.

    2015-12-01

    Plant photosynthetic rates increase and stomatal apertures decrease in response to elevated atmospheric CO2 (eCO2), increasing both plant carbon (C) availability and water use efficiency. These physiological responses to eCO2 are well characterised and understood; however, the ecological effects of these responses as they cascade through a suite of plant and ecosystem processes are complex and subject to multiple interactions and feedbacks. Therefore the response of the terrestrial carbon sink to increasing atmospheric CO2 remains the largest uncertainty in global C cycle modelling to date, and is a huge contributor to uncertainty in climate change projections. Phase 2 of the FACE Model-Data Synthesis (FACE-MDS) project synthesises ecosystem observations from five long-term Free-Air CO2 Enrichment (FACE) experiments and one open top chamber (OTC) experiment to evaluate the assumptions of a suite of terrestrial ecosystem models. The experiments are: the evergreen needleleaf Duke Forest FACE (NC), the deciduous broadleaf Oak Ridge FACE (TN), the prairie heating and CO2 enrichment FACE (WY), the Nevada desert FACE, and the evergreen scrub oak OTC (FL). An assumption-centred approach is being used to analyse: the interaction between eCO2 and water limitation on plant productivity; the interaction between eCO2 and temperature on plant productivity; whether the increased rates of soil decomposition observed in many eCO2 experiments can account for model deficiencies in N uptake shown during Phase 1 of the FACE-MDS; and tracing carbon through the ecosystem to identify the exact cause of changes in ecosystem C storage.

  14. SIMPLEST DIFFERENTIAL EQUATION OF STOCK PRICE, ITS SOLUTION AND RELATION TO ASSUMPTION OF BLACK-SCHOLES MODEL

    Institute of Scientific and Technical Information of China (English)

    云天铨; 雷光龙

    2003-01-01

    Two mathematical descriptions of the stock price can be shown to be completely equivalent under a certain correspondence of coefficients: one, based on a certain (deterministic) description, is the solution of the simplest differential equation (S.D.E.), obtained by a method similar to that used in solid mechanics; the other, based on an uncertain (i.e., statistical) description, is the assumption of the Black-Scholes model (A.B-S.M.), in which the density function of the stock price obeys a logarithmic normal distribution. The range of validity of the solution of the S.D.E. has been shown to be restricted to normal conditions of the stock market (no profit or lost-profit news, etc.), so the same range applies to the A.B-S.M. as well.
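
    For orientation, the Black-Scholes side of this equivalence is standard textbook material (this is not the authors' S.D.E., just the conventional stochastic description it is being compared with): the stock price follows a geometric Brownian motion, and its density at any fixed horizon is lognormal.

```latex
% Standard geometric Brownian motion and the implied lognormal density of S_T
\[
  dS_t = \mu S_t\,dt + \sigma S_t\,dW_t,
  \qquad
  S_T = S_0 \exp\!\Big[\big(\mu - \tfrac{1}{2}\sigma^2\big)T + \sigma W_T\Big],
\]
\[
  f_{S_T}(s) = \frac{1}{s\,\sigma\sqrt{2\pi T}}
    \exp\!\left(-\frac{\big[\ln(s/S_0) - (\mu - \tfrac{1}{2}\sigma^2)T\big]^2}{2\sigma^2 T}\right),
  \qquad s > 0.
\]
```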

  15. Multiquadratic methods, collocation and kriging - comparison with geostatistical model assumptions; Multiquadratische Methode, Kollokation und Kriging - Vergleich unter geostatistischen Modellannahmen

    Energy Technology Data Exchange (ETDEWEB)

    Menz, J. [Technische Univ. Freiburg (Germany). Inst. fuer Markscheidewesen und Geodaesie; Bian Shaofeng [Technical Univ. of Surveying and Mapping, Wuhan (China)

    1998-10-01

    The contribution shows that Hardy's multiquadric method leads to predictions that are similar in structure to those obtained by collocation. Under geostatistical model assumptions, formulas for calculating the prediction error are derived from the law of error propagation, and on this basis the multiquadric method is compared with collocation. The equivalences between collocation and kriging are pointed out, and it is indicated how the predictions can be improved in the Bayesian sense. (orig./MSK)
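
    To make the first ingredient concrete: Hardy's multiquadric prediction interpolates scattered observations z(x_i) with radial basis functions of the form sqrt(||x - x_i||^2 + a^2). A minimal sketch (generic textbook multiquadric interpolation, not the geostatistical error analysis of the paper; the shape parameter a and the synthetic data are illustrative):

```python
import numpy as np

def multiquadric_fit(points, values, a=1.0):
    """Solve for the coefficients of Hardy's multiquadric interpolant."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = np.sqrt(d**2 + a**2)              # multiquadric kernel matrix
    return np.linalg.solve(phi, values)     # exact interpolation conditions

def multiquadric_predict(x_new, points, coeffs, a=1.0):
    d = np.linalg.norm(x_new[:, None, :] - points[None, :, :], axis=-1)
    return np.sqrt(d**2 + a**2) @ coeffs

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(30, 2))          # scattered sample locations
z = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]         # synthetic surface values
c = multiquadric_fit(pts, z, a=2.0)
print(multiquadric_predict(np.array([[5.0, 5.0]]), pts, c, a=2.0))
```

    Collocation and kriging replace this fixed kernel by a covariance model, which is what makes prediction-error (kriging variance) statements of the kind discussed above possible.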

  16. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research in viewpoint-oriented requirements engineering and intelligent agents, we present the concept of the viewpoint agent and its abstract model, based on a meta-language for multi-view requirements engineering. This provides a basis for consistency checking and integration of requirements from different viewpoints; at the same time, the checking and integration work can be realized automatically by virtue of the intelligent agent's autonomy, proactiveness and social ability. Finally, we introduce the practical application of the model through a case study of data flow diagrams.

  17. Critical appraisal of assumptions in chains of model calculations used to project local climate impacts for adaptation decision support—the case of Baakse Beek

    Science.gov (United States)

    van der Sluijs, Jeroen P.; Wardekker, J. Arjan

    2015-04-01

    In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of, respectively, subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so-called ‘mineralization reduction factor’ (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight into model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to

  18. Testing Earth System Model Assumptions of Photosynthetic Parameters with in situ Leaf Measurements from a Temperate Zone Forest.

    Science.gov (United States)

    Cheng, S. J.; Thomas, R. Q.; Wilkening, J. V.; Curtis, P.; Sharkey, T. D.; Nadelhoffer, K. J.

    2015-12-01

    Estimates of global land CO2 uptake vary widely across Earth system models. This uncertainty around model estimates of land-atmosphere CO2 fluxes may result from differences in how models parameterize and scale photosynthesis from the leaf-to-global level. To test model assumptions about photosynthesis, we derive rates of maximum carboxylation (Vc,max), electron transport (J), and triose phosphate utilization (TPU) from in situ leaf measurements from a forest representative of the Great Lakes region. Leaf-level gas exchange measurements were collected across a temperature range from sun and shade leaves of canopy-dominant tree species typically grouped into the same plant functional type. We evaluate the influence of short-term increases in leaf temperature, nitrogen per leaf area (Narea), species, and leaf light environment on Vc,max, J, and TPU by testing contrasting model equations that isolate the influence of these factors on these rate-limiting steps in leaf photosynthesis. Results indicate that patterns in Vc,max are best explained by a model that includes temperature and Narea. However, J varied with species and leaf light environment in addition to temperature. TPU also varied with leaf light environment and possibly with temperature. These variations in J and TPU with species or between sun and shade leaves suggest that plant traits outside of Narea are needed to explain patterns in J and TPU. This study provides in situ evidence on how Vc,max, J, and TPU vary within a forest canopy and highlights how leaf responses to changes in climate, forest species composition, and canopy structure may alter forest CO2 uptake.
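
    A minimal sketch of the kind of contrasting model form being tested, in which Vc,max at 25 °C is taken as linear in Narea and scaled with a standard Arrhenius temperature response (the functional forms are generic and the parameter values are invented for illustration; they are not the fitted values from this study):

```python
import numpy as np

R = 8.314  # universal gas constant, J mol-1 K-1

def arrhenius(t_leaf_c, ha=65000.0, t_ref_c=25.0):
    """Standard Arrhenius scaling relative to a reference temperature."""
    tk, tref = t_leaf_c + 273.15, t_ref_c + 273.15
    return np.exp(ha * (tk - tref) / (R * tk * tref))

def vcmax_model(t_leaf_c, n_area, a=10.0, b=30.0):
    """Illustrative model: Vcmax at 25 C linear in Narea, then temperature-scaled."""
    vcmax25 = a + b * n_area                  # umol m-2 s-1 (hypothetical intercept/slope)
    return vcmax25 * arrhenius(t_leaf_c)

# e.g. a sun leaf vs a shade leaf with different nitrogen per area, measured at 30 C
print(vcmax_model(30.0, n_area=2.0), vcmax_model(30.0, n_area=1.2))
```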

  19. Early Validation of Automation Plant Control Software using Simulation Based on Assumption Modeling and Validation Use Cases

    Directory of Open Access Journals (Sweden)

    Veronika Brandstetter

    2015-10-01

    Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality and with minimal strain on resources. A key driver in conducting these processes is the automation plant's control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes and from the physical plant design. Currently, validation of the control software often starts late in the engineering process, once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects is, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on an actual plant project from the automation industry and present its technical implementation.

  20. Relevance of collisionality in the transport model assumptions for divertor detachment multi-fluid modelling on JET

    DEFF Research Database (Denmark)

    Wiesen, S.; Fundamenski, W.; Wischmeier, M.

    2011-01-01

    A revised formulation of the perpendicular diffusive transport model in 2D multi-fluid edge codes is proposed. Based on theoretical predictions and experimental observations a dependence on collisionality is introduced into the transport model of EDGE2D–EIRENE. The impact on time-dependent JET ga...

  1. Disastrous assumptions about community disasters

    Energy Technology Data Exchange (ETDEWEB)

    Dynes, R.R. [Univ. of Delaware, Newark, DE (United States). Disaster Research Center

    1995-12-31

    Planning for local community disasters is compounded by erroneous assumptions. Six problematic models are identified: agent facts, big accident, end of the world, media, command and control, administrative. Problematic assumptions in each of them are identified. A more adequate model centered on problem solving is identified. That there is a discrepancy between disaster planning efforts and the actual response experience seems rather universal. That discrepancy is symbolized by the graffiti which predictably surfaces on many walls in post-disaster locations -- "First the earthquake, then the disaster." That contradiction is seldom reduced as a result of post-disaster critiques, since the most usual conclusion is that the plan was adequate but the "people" did not follow it. Another explanation will be provided here. A more plausible explanation for failure is that most planning efforts adopt a number of erroneous assumptions which affect the outcome. Those assumptions are infrequently changed or modified by experience.

  2. Liquid and Ice Cloud Microphysics in the CSU General Circulation Model. Part III: Sensitivity to Modeling Assumptions.

    Science.gov (United States)

    Fowler, Laura D.; Randall, David A.

    1996-03-01

    The inclusion of cloud microphysical processes in general circulation models makes it possible to study the multiple interactions among clouds, the hydrological cycle, and radiation. The gaps between the temporal and spatial scales at which such cloud microphysical processes work and those at which general circulation models presently function force climate modelers to crudely parameterize and simplify the various interactions among the different water species (namely, water vapor, cloud water, cloud ice, rain, and snow) and to use adjustable parameters to which large-scale models can be highly sensitive. Accordingly, the authors have investigated the sensitivity of the climate, simulated with the Colorado State University general circulation model, to various aspects of the parameterization of cloud microphysical processes and its interactions with the cumulus convection and radiative transfer parameterizations. The results of 120-day sensitivity experiments corresponding to perpetual January conditions have been compared with those of a control simulation in order to (1) determine the importance of advecting cloud water, cloud ice, rain, and snow at the temporal and spatial scale resolutions presently used in the model; (2) study the importance of the formation of extended stratiform anvils at the tops of cumulus towers; (3) analyze the role of mixed-phase clouds in determining the partitioning among cloud water, cloud ice, rain, and snow and, hence, their impacts on the simulated cloud optical properties; (4) evaluate the sensitivity of the atmospheric moisture budget and precipitation rates to a change in the fall velocities of rain and snow; (5) determine the model's sensitivity to the prescribed thresholds of autoconversion of cloud water to rain and cloud ice to snow; and (6) study the impact of the collection of supercooled cloud water by snow, as well as accounting for the cloud optical properties of snow. Results are presented in terms of 30-day mean differences
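
    The autoconversion-threshold sensitivity mentioned in point (5) can be illustrated with a Kessler-type rate, a generic textbook parameterization rather than the exact scheme used in the CSU GCM; the rate constant and threshold below are illustrative:

```python
def kessler_autoconversion(q_cloud, q_threshold=5e-4, k=1e-3):
    """Kessler-type autoconversion of cloud water to rain (kg/kg per second).

    Rain only forms once cloud water exceeds the prescribed threshold, so the
    simulated precipitation and cloud lifetime are sensitive to q_threshold.
    """
    return k * max(0.0, q_cloud - q_threshold)

for q in (3e-4, 6e-4, 1e-3):
    print(q, kessler_autoconversion(q), kessler_autoconversion(q, q_threshold=1e-3))
```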

  3. Environment Assumptions for Synthesis

    CERN Document Server

    Chatterjee, Krishnendu; Jobstmann, Barbara

    2008-01-01

    The synthesis problem asks to construct a reactive finite-state system from an $\\omega$-regular specification. Initial specifications are often unrealizable, which means that there is no system that implements the specification. A common reason for unrealizability is that assumptions on the environment of the system are incomplete. We study the problem of correcting an unrealizable specification $\\phi$ by computing an environment assumption $\\psi$ such that the new specification $\\psi\\to\\phi$ is realizable. Our aim is to construct an assumption $\\psi$ that constrains only the environment and is as weak as possible. We present a two-step algorithm for computing assumptions. The algorithm operates on the game graph that is used to answer the realizability question. First, we compute a safety assumption that removes a minimal set of environment edges from the graph. Second, we compute a liveness assumption that puts fairness conditions on some of the remaining environment edges. We show that the problem of findi...

  4. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism.

  5. Hawaiian forest bird trends: using log-linear models to assess long-term trends is supported by model diagnostics and assumptions (reply to Freed and Cann 2013)

    Science.gov (United States)

    Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.

    2014-01-01

    Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.

  6. Linking assumptions in amblyopia

    Science.gov (United States)

    LEVI, DENNIS M.

    2017-01-01

    Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956

  7. Test of Poisson Failure Assumption.

    Science.gov (United States)

    1982-09-01

    TEST OF POISSON FAILURE ASSUMPTION. Chapter 1. INTRODUCTION. 1.1 Background. In stockage models... precipitates a regular failure pattern; it is also possible that the coding of scheduled vs unscheduled does not reflect what we would expect. Data
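
    A standard way to test a Poisson assumption on failure-count data is the index-of-dispersion (variance-to-mean) test; the sketch below is a generic illustration with simulated counts, not the procedure of the report itself:

```python
import numpy as np
from scipy import stats

def poisson_dispersion_test(counts):
    """Index-of-dispersion test: under a Poisson model, (n-1)*var/mean ~ chi-square(n-1)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    stat = (n - 1) * counts.var(ddof=1) / counts.mean()
    p_value = stats.chi2.sf(stat, df=n - 1)      # one-sided test for over-dispersion
    return stat, p_value

rng = np.random.default_rng(1)
print(poisson_dispersion_test(rng.poisson(4.0, size=50)))          # consistent with Poisson
print(poisson_dispersion_test(rng.negative_binomial(2, 0.3, 50)))  # over-dispersed counts
```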

  8. Radiation Belt and Plasma Model Requirements

    Science.gov (United States)

    Barth, Janet L.

    2005-01-01

    Contents include the following: Radiation belt and plasma model environment. Environment hazards for systems and humans. Need for new models. How models are used. Model requirements. How can space weather community help?

  9. Distributional Assumptions in Educational Assessments Analysis: Normal Distributions versus Generalized Beta Distribution in Modeling the Phenomenon of Learning

    Science.gov (United States)

    Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire

    2013-01-01

    This paper introduces the generalized beta (GB) model as a new modeling tool in the area of educational assessment and evaluation analysis. Unlike the normal model, the GB model allows us to capture some real characteristics of the data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…
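
    The modeling contrast can be sketched by fitting a bounded beta distribution and a normal distribution to scores on a 0-100 scale and comparing log-likelihoods; the four-parameter beta here is a stand-in for the paper's generalized beta, and the simulated scores are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = 100 * rng.beta(5, 2, size=500)        # skewed, bounded "test scores"

# Normal fit (unbounded, symmetric)
mu, sigma = stats.norm.fit(scores)
ll_norm = stats.norm.logpdf(scores, mu, sigma).sum()

# Beta fit on the known [0, 100] support (only the shape parameters are estimated)
a, b, loc, scale = stats.beta.fit(scores, floc=0, fscale=100)
ll_beta = stats.beta.logpdf(scores, a, b, loc=loc, scale=scale).sum()

print(f"log-likelihood  normal: {ll_norm:.1f}   beta: {ll_beta:.1f}")
```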

  10. Limitations of individual causal models, causal graphs, and ignorability assumptions, as illustrated by random confounding and design unfaithfulness.

    Science.gov (United States)

    Greenland, Sander; Mansournia, Mohammad Ali

    2015-10-01

    We describe how ordinary interpretations of causal models and causal graphs fail to capture important distinctions among ignorable allocation mechanisms for subject selection or allocation. We illustrate these limitations in the case of random confounding and designs that prevent such confounding. In many experimental designs individual treatment allocations are dependent, and explicit population models are needed to show this dependency. In particular, certain designs impose unfaithful covariate-treatment distributions to prevent random confounding, yet ordinary causal graphs cannot discriminate between these unconfounded designs and confounded studies. Causal models for populations are better suited for displaying these phenomena than are individual-level models, because they allow representation of allocation dependencies as well as outcome dependencies across individuals. Nonetheless, even with this extension, ordinary graphical models still fail to capture distinctions between hypothetical superpopulations (sampling distributions) and observed populations (actual distributions), although potential-outcome models can be adapted to show these distinctions and their consequences.

  11. Modelling N2O dynamics in the engineered N cycle: Observations, assumptions, knowns, and unknowns

    DEFF Research Database (Denmark)

    Smets, Barth F.; Pellicer i Nàcher, Carles; Jensen, Marlene Mark;

    of the main microbial processes responsible for its production and consumption. The conceptualization of these pathways in mathematical models has the potential to become a key tool to increase our understanding of the complex interrelationships within these ecosystems and to develop strategies to minimize the carbon footprint of wastewater treatment plants. Unfortunately, existing model structures are limited to describing the emissions of individual microbial pathways, in an attempt to decrease their complexity and facilitate their calibration. The present contribution summarizes the recent developments

  12. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism, for which we use our recently proposed AI & M algorithm. We present experimental results on synthetic data that show that our approximate test statistic is a good indicator for whether data is mar relative to the given distributional assumptions.

  13. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    Science.gov (United States)

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra, a technique from computer science, allows us to describe a system in terms of the stochastic behaviour of individuals. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing-scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
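
    The population-level equations that emerge from such individual-level descriptions are, for simple transmission, of the familiar SIR type; the sketch below shows only that mean-field end point (a generic SIR system, not the process-algebra derivation itself; the rate constants are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_mean_field(t, y, beta, gamma):
    """Population-level equations emerging from individual contact/recovery rates."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir_mean_field, (0, 160), y0=[0.99, 0.01, 0.0],
                args=(0.3, 0.1), dense_output=True)
t = np.linspace(0, 160, 5)
print(np.round(sol.sol(t), 3))   # proportions susceptible / infected / recovered over time
```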

  14. "A violation of the conditional independence assumption in the two-high-threshold Model of recognition memory": Correction to Chen, Starns, and Rotello (2015).

    Science.gov (United States)

    2016-01-01

    Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In

  15. A Test of Three Basic Assumptions of Situational Leadership® II Model and Their Implications for HRD Practitioners

    Science.gov (United States)

    Zigarmi, Drea; Roberts, Taylor Peyton

    2017-01-01

    Purpose: This study aims to test the following three assertions underlying the Situational Leadership® II (SLII) Model: all four leadership styles are received by followers; all four leadership styles are needed by followers; and if there is a fit between the leadership style a follower receives and needs, that follower will demonstrate favorable…

  16. Simulating Star Clusters with the AMUSE Software Framework: I. Dependence of Cluster Lifetimes on Model Assumptions and Cluster Dissolution Modes

    CERN Document Server

    Whitehead, Alfred J; Vesperini, Enrico; Zwart, Simon Portegies

    2013-01-01

    We perform a series of simulations of evolving star clusters using AMUSE (the Astrophysical Multipurpose Software Environment), a new community-based multi-physics simulation package, and compare our results to existing work. These simulations model a star cluster beginning with a King model distribution and a selection of power-law initial mass functions, and contain a tidal cut-off. They are evolved using collisional stellar dynamics and include mass loss due to stellar evolution. After determining that the differences between AMUSE results and prior publications are understood, we explored the variation in cluster lifetimes due to the random realization noise introduced by transforming a King model to specific initial conditions. This random realization noise can affect the lifetime of a simulated star cluster by up to 30%. Two modes of star cluster dissolution were identified: a mass evolution curve that contains a run-away cluster dissolution with a sudden loss of mass, and a dissolution mode that does n...

  17. A Framework for Modelling Software Requirements

    Directory of Open Access Journals (Sweden)

    Dhirendra Pandey

    2011-05-01

    Full Text Available Requirement engineering plays an important role in producing quality software products. In recent years, several requirement framework approaches have been designed to provide an end-to-end solution for the system development life cycle. Textual requirements specifications are difficult to learn, design, understand, review, and maintain, whereas pictorial modelling is widely recognized as an effective requirement analysis tool. In this paper, we present a requirement modelling framework together with an analysis of modern requirements modelling techniques. We also discuss various domains of requirement engineering with the help of modelling elements such as a semantic map of business concepts, lifecycles of business objects, business processes, business rules, a system context diagram, use cases and their scenarios, constraints, and user interface prototypes. The proposed framework is illustrated with a case study of an inventory management system.

  18. Estimating the position of illuminants in paintings under weak model assumptions: an application to the works of two Baroque masters

    Science.gov (United States)

    Kale, David; Stork, David G.

    2009-02-01

    The problems of estimating the position of an illuminant and the direction of illumination in realist paintings have been addressed using algorithms from computer vision. These algorithms fall into two general categories: In model-independent methods (cast-shadow analysis, occluding-contour analysis, ...), one does not need to know or assume the three-dimensional shapes of the objects in the scene. In model-dependent methods (shape-from-shading, full computer graphics synthesis, ...), one does need to know or assume the three-dimensional shapes. We explore the intermediate- or weak-model condition, where the three-dimensional object rendered is so simple one can very confidently assume its three-dimensional shape and, further, that this shape admits an analytic derivation of the appearance model. Specifically, we can assume that floors and walls are flat and that they are horizontal and vertical, respectively. We derived the maximum-likelihood estimator for the two-dimensional spatial location of a point source in an image as a function of the pattern of brightness (or grayscale value) over such a planar surface. We applied our methods to two paintings of the Baroque, paintings for which the question of the illuminant position is of interest to art historians: Georges de la Tour's Christ in the carpenter's studio (1645) and Caravaggio's The calling of St. Matthew (1599-1600). Our analyses show that a single point source (somewhat near to the depicted candle) is a slightly better explanation of the pattern of brightness on the floor in Christ than are two point sources, one in place of each of the figures. The luminance pattern on the rear wall in The calling implies the source is local, a few meters outside the picture frame, not the infinitely distant sun. Both results are consistent with previous rebuttals of the recent art historical claim that these paintings were executed by means of tracing optically projected images. Our method is the first application of such
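
    The weak-model idea can be sketched numerically: for a point source at height h above a horizontal Lambertian floor, the irradiance at horizontal offset (x, y) falls off as h/(d^2 + h^2)^(3/2), so the source position can be recovered from the brightness pattern by least squares. The code below is a synthetic-data illustration under those assumptions, not the maximum-likelihood estimator derived in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def floor_irradiance(xy, x0, y0, h, intensity):
    """Point source over a horizontal Lambertian plane: E = I * h / r^3."""
    dx, dy = xy[:, 0] - x0, xy[:, 1] - y0
    r2 = dx**2 + dy**2 + h**2
    return intensity * h / r2**1.5

rng = np.random.default_rng(3)
pix = rng.uniform(-2, 2, size=(400, 2))                       # sampled floor positions (m)
true = dict(x0=0.5, y0=-0.3, h=1.2, intensity=50.0)
obs = floor_irradiance(pix, **true) * (1 + 0.05 * rng.standard_normal(400))

def residuals(p):
    return floor_irradiance(pix, *p) - obs

fit = least_squares(residuals, x0=[0.0, 0.0, 1.0, 10.0])
print(fit.x)   # recovered (x0, y0, h, intensity)
```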

  19. Modeling probability and additive summation for detection across multiple mechanisms under the assumptions of signal detection theory.

    Science.gov (United States)

    Kingdom, Frederick A A; Baldwin, Alex S; Schmidtmann, Gunnar

    2015-01-01

    Many studies have investigated how multiple stimuli combine to reach threshold. There are broadly speaking two ways this can occur: additive summation (AS) where inputs from the different stimuli add together in a single mechanism, or probability summation (PS) where different stimuli are detected independently by separate mechanisms. PS is traditionally modeled under high threshold theory (HTT); however, tests have shown that HTT is incorrect and that signal detection theory (SDT) is the better framework for modeling summation. Modeling the equivalent of PS under SDT is, however, relatively complicated, leading many investigators to use Monte Carlo simulations for the predictions. We derive formulas that employ numerical integration to predict the proportion correct for detecting multiple stimuli assuming PS under SDT, for the situations in which stimuli are either equal or unequal in strength. Both formulas are general purpose, calculating performance for forced-choice tasks with M alternatives, n stimuli, in Q monitored mechanisms, each subject to a non-linear transducer with exponent τ. We show how the probability (and additive) summation formulas can be used to simulate psychometric functions, which when fitted with Weibull functions make signature predictions for how thresholds and psychometric function slopes vary as a function of τ, n, and Q. We also show how one can fit the formulas directly to real psychometric functions using data from a binocular summation experiment, and show how one can obtain estimates of τ and test whether binocular summation conforms more to PS or AS. The methods described here can be readily applied using software functions newly added to the Palamedes toolbox.
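
    For the equal-strength case the probability-summation prediction under SDT with a MAX decision rule reduces to a single integral: the correct interval must contain the largest of all Q*M monitored responses. The sketch below numerically evaluates that quantity for n target stimuli of sensitivity d' among Q mechanisms in an M-alternative task; it is a generic MAX-rule formulation under those assumptions (and omits the transducer exponent tau), not a transcription of the paper's formulas:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def pc_probability_summation(d_prime, n=1, Q=1, M=2):
    """P(correct) in M-AFC under SDT with a MAX rule across Q monitored mechanisms.

    The target interval holds n signal mechanisms (mean d') and Q-n noise mechanisms;
    each of the other M-1 intervals holds Q noise mechanisms.
    """
    norm = stats.norm
    def integrand(t):
        # density of the maximum response in the target interval ...
        f_target = (n * norm.pdf(t - d_prime) * norm.cdf(t - d_prime)**(n - 1)
                    * norm.cdf(t)**(Q - n)
                    + (Q - n) * norm.pdf(t) * norm.cdf(t)**(Q - n - 1)
                    * norm.cdf(t - d_prime)**n)
        # ... times the probability that all (M-1)*Q distractor responses are smaller
        return f_target * norm.cdf(t)**((M - 1) * Q)
    return quad(integrand, -8, 8 + d_prime)[0]

print(pc_probability_summation(1.0, n=1, Q=1, M=2))   # classic 2AFC, single channel
print(pc_probability_summation(1.0, n=2, Q=4, M=2))   # two stimuli, four monitored channels
```

    For d' = 1, n = Q = 1 and M = 2 the first call reproduces the classic two-alternative result Phi(d'/sqrt(2)) of about 0.76, which is a convenient sanity check.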

  20. Evaluation of a measure on the quasi-steady state assumption of Collisional Radiative Models via Intrinsic Low Dimensional Manifold Technique

    CERN Document Server

    Kemaneci, Efe; Graef, Wouter; van Dijk, Jan; Kroesen, Gerrit M W

    2015-01-01

    The collisional and radiative dynamics of a plasma are described by so-called Collisional Radiative Models [1], which simplify the chemical kinetics by assigning a quasi-steady state to certain types of particles. The assignment is conventionally based on a classification of the plasma species by the ratio of the transport to the local destruction frequencies. We show that this classification is not exact, owing to the role of the time-dependent local production, and that a measure is necessary to confirm the validity of the assignment. The main goal of this study is to evaluate a measure of the quasi-steady state assumptions of these models. Inspired by a chemical reduction technique called Intrinsic Low Dimensional Manifolds [2, 3], an estimated local source is provided at the transport time-scale. This source is a deviation from the quasi-steady state for the particle, and its value is assigned as an error of the quasi-steady state assumption. The propagation of this error to the derived quantities is formulated in the Colli...

  1. Modeling Requirements for Cohort and Register IT.

    Science.gov (United States)

    Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred

    2016-01-01

    The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks like cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The objective was to make transparent the complex relationships between requirements, which are described in use cases in a given text catalog; by analyzing and modeling the requirements, a better understanding and optimization of the catalog are intended. There are two subgoals: a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; b) to analyze the current state models and to find simplifications within the generic catalog. Processing of the generic catalog was performed by means of text extraction, conceptualization and concept mapping. Methods of enterprise architecture planning (EAP) are then used to model the extracted information. For objective a), questionnaires are developed by utilizing the model. They are used for semi-structured interviews, whose results are evaluated via qualitative content analysis. Afterwards the current state was modeled. Objective b) was addressed by model analysis. A given generic text catalog of requirements was transferred into a model. As a result of objective a), current state models of one existing cohort study and two registers were created and analyzed. An optimized model called the KoReg-reference-model is the result of objective b). It is possible to use methods of EAP to model requirements. This enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg-reference-model. Modeling the current state and the generation of reports from the model, which could be used as

  2. An Extended Analysis of Requirements Traceability Model

    Institute of Scientific and Technical Information of China (English)

    Jiang Dandong(蒋丹东); Zhang Shensheng; Chen Lu

    2004-01-01

    A new extended meta model of traceability is presented. Then, a formalized fine-grained model of traceability is described. Some major issues about this model, including trace units, requirements and relations within the model, are further analyzed. Finally, a case study that comes from a key project of 863 Program is given.

  3. Long-term dynamics simulation: Modeling requirements

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S.; Kar, P.K.; Rogers, G.J.; Morison, G.K. (Ontario Hydro, Toronto, ON (Canada))

    1989-12-01

    This report details the required performance and modelling capabilities of a computer program intended for the study of the long term dynamics of power systems. Following a general introduction which outlines the need for long term dynamic studies, the modelling requirements for the conduct of such studies are discussed in detail. Particular emphasis is placed on models for system elements not normally modelled in power system stability programs, which will have a significant impact in the long term time frame of minutes to hours following the initiating disturbance. The report concludes with a discussion of the special computational and programming requirements for a long term stability program. 43 refs., 36 figs.

  4. Examining Computational Assumptions For Godiva IV

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Alexander Matthew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jaegers, Peter James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-11

    Over the course of summer 2016, the effects of several computational modeling assumptions with respect to the Godiva IV reactor were examined. The majority of these assumptions pertained to modeling errors existing in the control rods and burst rod. The Monte Carlo neutron transport code MCNP was used to investigate these modeling changes, primarily by comparing them to the original input deck specifications.

  5. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling, and if we one day have a certified tools list, any tool that does not meet essential criteria would be excluded. Recommended requirements are those more advanced requirements that may be met by tools offering a superior product or only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a high level of agreement to enable the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support by end users. This list could also guide regulators in order to identify requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times. [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source (like a gamma-ray burst, or a flare from an active nucleus) and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better), we can provide constraints on these

  7. Modelling of organic aerosols over Europe (2002–2007) using a volatility basis set (VBS) framework with application of different assumptions regarding the formation of secondary organic aerosol

    Directory of Open Access Journals (Sweden)

    D. Simpson

    2012-02-01

    Full Text Available A new organic aerosol (OA) module has been implemented into the EMEP chemical transport model. Four different volatility basis set (VBS) schemes have been tested in long-term simulations for Europe, covering the six years 2002–2007. Different assumptions regarding partitioning of primary OA (POA) and aging of POA and secondary OA (SOA) have been explored. Model results are compared to filter measurements, AMS data and source-apportionment studies, as well as to other model studies. The present study indicates that many different sources contribute significantly to OA in Europe. Fossil POA and oxidised POA, biogenic and anthropogenic SOA (BSOA and ASOA), residential burning of biomass fuels and wildfire emissions may all contribute more than 10% each over substantial parts of Europe. Simple VBS-based OA models can give reasonably good results for summer OA, but more observational studies are needed to constrain the VBS parameterisations and to help improve emission inventories. The volatility distribution of primary emissions is an important issue for further work. This study shows smaller contributions from BSOA to OA in Europe than earlier work, but relatively greater ASOA. BVOC emissions are highly uncertain and need further validation. We can not reproduce winter levels of OA in Europe, and there are many indications that the present emission inventories substantially underestimate emissions from residential wood burning in large parts of Europe.
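
    The bookkeeping step that all VBS schemes share is the equilibrium partitioning of each volatility bin between gas and particle phases, with particle fraction 1/(1 + C*_i/C_OA) solved self-consistently for the total organic-aerosol mass C_OA. A minimal sketch of that step (standard VBS partitioning; the bin structure and loadings are illustrative, and this is not the EMEP implementation):

```python
import numpy as np

def vbs_partition(total_ug_m3, c_star_ug_m3, max_iter=100):
    """Equilibrium gas/particle split for volatility bins with saturation concentrations C*.

    Particle fraction of bin i: xi_i = 1 / (1 + C*_i / C_OA), where C_OA is the
    total particle-phase organic mass, found by fixed-point iteration.
    """
    total = np.asarray(total_ug_m3, float)
    c_star = np.asarray(c_star_ug_m3, float)
    c_oa = total.sum() * 0.5                     # initial guess
    for _ in range(max_iter):
        xi = 1.0 / (1.0 + c_star / c_oa)
        c_oa_new = (total * xi).sum()
        if abs(c_oa_new - c_oa) < 1e-9:
            break
        c_oa = c_oa_new
    return xi, c_oa

# four volatility bins (C* = 0.1, 1, 10, 100 ug/m3) with equal total mass in each
xi, c_oa = vbs_partition([1.0, 1.0, 1.0, 1.0], [0.1, 1.0, 10.0, 100.0])
print(np.round(xi, 3), round(c_oa, 3))
```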

  8. A Requirements Analysis Model Based on QFD

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi-wei; Nelson K.H.Tang

    2004-01-01

    The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting this system and regarding it as an important innovation. However, there is already evidence of high failure risks in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports requirements analysis for ERP projects.

  9. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers

  10. 39 Questionable Assumptions in Modern Physics

    Science.gov (United States)

    Volk, Greg

    2009-03-01

    The growing body of anomalies in new energy, low energy nuclear reactions, astrophysics, atomic physics, and entanglement, combined with the failure of the Standard Model and string theory to predict many of the most basic fundamental phenomena, all point to a need for major new paradigms. Not Band-Aids, but revolutionary new ways of conceptualizing physics, in the spirit of Thomas Kuhn's The Structure of Scientific Revolutions. This paper identifies a number of long-held, but unproven assumptions currently being challenged by an increasing number of alternative scientists. Two common themes, both with venerable histories, keep recurring in the many alternative theories being proposed: (1) Mach's Principle, and (2) toroidal, vortex particles. Matter-based Mach's Principle differs from both space-based universal frames and observer-based Einsteinian relativity. Toroidal particles, in addition to explaining electron spin and the fundamental constants, satisfy the basic requirement of Gauss's misunderstood B Law, that motion itself circulates. Though a comprehensive theory is beyond the scope of this paper, it will suggest alternatives to the long list of assumptions in context.

  11. Experimental validation and effect of modelling assumptions in the hierarchical multi-scale simulation of the cup drawing of AA6016 sheets

    Science.gov (United States)

    Ramírez, M. A.; Schouwenaars, R.; Eyckens, P.; Gawad, J.; Kestens, L.; Van Bael, A.; Van Houtte, P.

    2017-01-01

    An essential step in the improvement of design strategies for a wide range of industrial deep drawing applications is the development of methods which allow for the precise prediction of shape and processing parameters. Earlier work has demonstrated, in a clear but qualitative manner, the capabilities of the hierarchical multiscale (HMS) model, which predicts the anisotropic plastic properties of metallic materials based on a statistical analysis of microstructure-based anisotropy and a continuous description of the yield locus. The method is implemented into the ABAQUS finite-element software but, until recently, little attention had been paid to other factors which determine the accuracy of a finite element prediction in general, such as mesh size, friction coefficient and rigid/elastic modelling of the tools. Through the analysis of cup drawing, which is a well-established laboratory-scale test relevant to industrial applications, a quantitative comparison is provided between measured cup geometry and punch force and modelling results for commercial AA6016T4 aluminium sheets. The relatively weak earing behaviour of these materials serves to emphasise the small differences still found between model and experiment, which may be addressed by future refinement of the micromechanical component of the HMS. Average cup height and punch force, which is an important process parameter omitted in earlier studies, depend primarily on the friction coefficient and assumptions in the modelling of the tools. Considering the balance between accuracy and precision, it is concluded that the proposed methodology has matured sufficiently to be used as a design tool at industrial level.

  12. A fuzzy model for exploiting customer requirements

    Directory of Open Access Journals (Sweden)

    Zahra Javadirad

    2016-09-01

    Full Text Available Nowadays, quality function deployment (QFD) is one of the total quality management tools, in which customers' views and requirements are captured and, using various techniques, translated into improved production requirements and operations. After identifying and analyzing the competitors, the QFD team uses customer feedback to meet customer demands for the products relative to the competitors. In this study, a comprehensive model for assessing the importance of the customer requirements in the products or services of an organization is proposed. The proposed study uses linguistic variables, as a more comprehensive approach, to increase the precision of the expressed evaluations. The importance of these requirements specifies the strengths and weaknesses of the organization in meeting the requirements relative to competitors. The results of the experiments show that the proposed method performs better than the other methods.
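
    A minimal sketch of the linguistic-variable step (a generic triangular-fuzzy-number scale and centroid defuzzification, chosen for illustration rather than taken from the paper): customer ratings expressed as linguistic terms are mapped to triangular fuzzy numbers, averaged across respondents, and defuzzified to a crisp importance weight.

```python
# Triangular fuzzy numbers (l, m, u) for an illustrative 5-level linguistic scale
SCALE = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def fuzzy_importance(ratings):
    """Average the fuzzy ratings of several customers, then defuzzify (centroid)."""
    nums = [SCALE[r] for r in ratings]
    l, m, u = (sum(v[i] for v in nums) / len(nums) for i in range(3))
    return (l + m + u) / 3.0          # centroid of a triangular fuzzy number

print(fuzzy_importance(["high", "very high", "medium", "high"]))
```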

  13. New Cryptosystem Using Multiple Cryptographic Assumptions

    Directory of Open Access Journals (Sweden)

    E. S. Ismail

    2011-01-01

    Full Text Available Problem statement: A cryptosystem is a way for a sender and a receiver to communicate digitally, by which the sender can send the receiver any confidential or private message by first encrypting it using the receiver's public key. Upon receiving the encrypted message, the receiver can recover the original message contents using his own secret key. Up to now, most existing cryptosystems have been developed based on a single cryptographic assumption, such as factoring, discrete logarithms, quadratic residues or the elliptic curve discrete logarithm. Although these schemes remain secure today, one day in the near future they may be broken if one finds a polynomial-time algorithm that can efficiently solve the underlying cryptographic assumption. Approach: Motivated by this, we designed a new cryptosystem based on two cryptographic assumptions: quadratic residues and discrete logarithms. We integrated these two assumptions in our encrypting and decrypting equations so that the former depends on one public key whereas the latter depends on one corresponding secret key and two secret numbers. Each of the public and secret keys in our scheme determines the assumptions we use. Results: The newly developed cryptosystem is shown to be secure against the three commonly considered algebraic attacks using a heuristic security technique. The efficiency of our scheme requires a time complexity of 2Texp + 2Tmul + Thash for encryption and Texp + 2Tmul + Tsrt for decryption, and this magnitude of complexity is considered minimal for cryptosystems based on multiple cryptographic assumptions. Conclusion: The new cryptosystem based on multiple cryptographic assumptions offers a greater security level than schemes based on a single cryptographic assumption. The adversary has to solve the two assumptions simultaneously to recover the original message from the received encrypted message, but this is very unlikely to happen.

  14. On Assumptions in Development of a Mathematical Model of Thermo-gravitational Convection in the Large Volume Process Tanks Taking into Account Fermentation

    Directory of Open Access Journals (Sweden)

    P. M. Shkapov

    2015-01-01

    Full Text Available The paper provides a mathematical model of thermo-gravitational convection in a large-volume vertical cylinder. Heat is removed from the product via the cooling jacket at the top of the cylinder. We suppose that laminar fluid motion takes place. The model is based on the Navier-Stokes equation, the equation of heat transfer through the wall, and the heat transfer equation. A peculiarity of the process in large-volume tanks is the spatial distribution of the physical parameters, which was taken into account when constructing the model. The model corresponds to the process of beer wort fermentation in cylindrical-conical tanks (CCT). The CCT volume is divided into three zones, and model equations were obtained for each zone. The first zone has an annular cross-section and is limited in height by the cooling jacket. In this zone the heat flow from the cooling jacket to the product dominates. The model equation of the first zone describes the process of heat transfer through the wall and is a linear inhomogeneous partial differential equation that is solved analytically. The description of the second and third zones required a number of engineering assumptions. The fluid is considered Newtonian, viscous and incompressible. Convective motion is considered in the Boussinesq approximation. The effect of viscous dissipation is not considered. The topology of the fluid motion is similar to cylindrical Poiseuille flow. The second-zone model consists of the Navier-Stokes equations in cylindrical coordinates, introduced in a simplified form, and the heat equation in the liquid layer. The volume occupied by the upward convective flow constitutes the third zone. The convective flows do not mix and do not exchange heat. At the start of the process the medium has a uniform temperature and zero velocity in the whole volume, which allows us to specify the initial conditions for the process. The paper shows the

  15. Effect of Selected Modeling Assumptions on Subsurface Radionuclide Transport Projections for the Potential Environmental Management Disposal Facility at Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Painter, Scott L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division

    2016-06-28

    The Department of Energy’s Office of Environmental Management recently revised a Remedial Investigation/Feasibility Study (RI/FS) that included an analysis of subsurface radionuclide transport at a potential new Environmental Management Disposal Facility (EMDF) in East Bear Creek Valley near Oak Ridge, Tennessee. The effects of three simplifying assumptions used in the RI/FS analyses are investigated using the same subsurface pathway conceptualization but with more flexible modeling tools. Neglect of vadose zone dispersion was found to be conservative or non-conservative, depending on the retarded travel time and the half-life. For a given equilibrium distribution coefficient, a relatively narrow range of half-life was identified for which neglect of vadose zone transport is non-conservative and radionuclide discharge into surface water is non-negligible. However, there are two additional conservative simplifications in the reference case that compensate for the non-conservative effect of neglecting vadose zone dispersion: the use of a steady infiltration rate and vadose zone velocity, and the way equilibrium sorption is used to represent transport in the fractured material of the saturated aquifer. With more realistic representations of all three processes, the RI/FS reference case was found either to provide a reasonably good approximation to the peak concentration or to be significantly conservative (pessimistic) for all parameter combinations considered.
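
    The dispersion finding can be illustrated with a back-of-the-envelope calculation (illustrative numbers only, unrelated to the site models): for a decaying solute the surviving fraction is the average of exp(-lambda*T) over the travel-time distribution, which by Jensen's inequality is never smaller than exp(-lambda*mean(T)); a no-dispersion model that uses a single mean travel time can therefore understate discharge when the half-life is short relative to the mean travel time.

```python
import numpy as np

rng = np.random.default_rng(7)
half_life_yr = 30.0
lam = np.log(2) / half_life_yr

# Spread of travel times (lognormal chosen so the mean is 200 years) vs a single mean value
mean_travel_yr = 200.0
travel_times = rng.lognormal(mean=np.log(mean_travel_yr) - 0.5 * 0.8**2, sigma=0.8, size=100_000)

surviving_with_dispersion = np.exp(-lam * travel_times).mean()   # distribution of travel times
surviving_plug_flow = np.exp(-lam * mean_travel_yr)              # single mean travel time

print(f"with dispersion: {surviving_with_dispersion:.2e}   plug flow: {surviving_plug_flow:.2e}")
```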

  16. User Requirements and Domain Model Engineering

    NARCIS (Netherlands)

    Specht, Marcus; Glahn, Christian

    2006-01-01

    Specht, M., & Glahn, C. (2006). User requirements and domain model engineering. Presentation at International Workshop in Learning Networks for Lifelong Competence Development. March, 30-31, 2006. Sofia, Bulgaria: TENCompetence Conference. Retrieved June 30th, 2006, from http://dspace.learningnetwor

  18. Modeling requirements for in situ vitrification

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  19. The Fitness of Assumptions and an Alternative Model for Funding the Public Sector Pension Scheme: The Case of Rio Grande do Sul

    Directory of Open Access Journals (Sweden)

    Paulo Roberto Caldart

    2014-12-01

    Full Text Available The research presented herein has two objectives. First, this study will test whether actuarial assumptions for public sector pension schemes in Brazil adhere to reality and whether changing these assumptions might affect the results, particularly with respect to life tables and wage growth assumptions. The paper shows that the best-fit life table is AT 2000 for males aggregated by one year, which involves a longer life expectancy than the life table proposed under current legislation (IBGE 2009). The data also show that actual wage growth was 4.59% per year from 2002 to 2012, as opposed to the 1% wage increase proposed by the same legislation. Changing these two assumptions increases the actuarial imbalance for a representative individual by 18.17% after accounting for the adjusted life table or by 98.30% after revising the wage growth assumption. With respect to its second objective, this paper proposes alternative funding mechanisms in which the local pension scheme will provide the funded component of the benefit that would be complemented by local government in a pay-as-you-go manner. The database utilized was for the state of Rio Grande do Sul in the month of November 2011. The results are thus restricted to Rio Grande do Sul.
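
    A minimal numerical sketch of the wage-growth sensitivity discussed above (all figures hypothetical, not the Rio Grande do Sul data): the present value of a salary-linked flow rises sharply when the assumed growth rate is moved from the legislated 1% towards the observed 4.59%, which is the mechanism behind the reported increase in the actuarial imbalance.

      # Hypothetical illustration of how the wage-growth assumption drives present values.
      def pv_growing_stream(base, growth, discount, years):
          """Present value of a stream growing at 'growth', discounted at 'discount'."""
          return sum(base * (1 + growth) ** t / (1 + discount) ** t for t in range(years))

      base, discount, years = 1000.0, 0.06, 30   # assumed values, for illustration only
      for g in (0.01, 0.0459):                   # legislated vs. observed wage growth
          print(f"wage growth {g:.2%}: PV = {pv_growing_stream(base, g, discount, years):,.0f}")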

  20. Understanding requirements via natural language information modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K.; Becker, S.D.

    1993-07-01

    Information system requirements that are expressed as simple English sentences provide a clear understanding of what is needed between system specifiers, administrators, users, and developers of information systems. The approach used to develop the requirements is the Natural-language Information Analysis Methodology (NIAM). NIAM allows the processes, events, and business rules to be modeled using natural language. The natural language presentation enables the people who deal with the business issues that are to be supported by the information system to describe exactly the system requirements that designers and developers will implement. Computer prattle is completely eliminated from the requirements discussion. An example is presented that is based upon a section of a DOE Order involving nuclear materials management. Where possible, the section is analyzed to specify the process(es) to be done, the event(s) that start the process, and the business rules that are to be followed during the process. Examples, including constraints, are developed. The presentation steps through the modeling process and shows where the section of the DOE Order needs clarification, extensions or interpretations that could provide a more complete and accurate specification.

  1. Roy's specific life values and the philosophical assumption of humanism.

    Science.gov (United States)

    Hanna, Debra R

    2013-01-01

    Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.

  2. Modeling and stabilization results for a charge or current-actuated active constrained layer (ACL) beam model with the electrostatic assumption

    Science.gov (United States)

    Özer, Ahmet Özkan

    2016-04-01

    An infinite-dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of the voltage actuation of the piezoelectric layer, the piezoelectric layer is proposed to be activated by a charge (or current) source. We show that the closed-loop system with all-mechanical feedback is uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.
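
    For readers unfamiliar with the terminology, uniform exponential stability of the closed-loop system means, in generic semigroup notation assumed here (not taken from the paper), that there exist constants M >= 1 and omega > 0, independent of the initial state z(0), such that

      \|z(t)\|_{\mathcal{H}} \le M\,e^{-\omega t}\,\|z(0)\|_{\mathcal{H}}, \qquad t \ge 0,

    where H denotes the state (energy) space of the beam model.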

  3. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    2008-01-01

    in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...

  4. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  5. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    2008-01-01

    thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...... in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...

  6. Integrated modelling requires mass collaboration (Invited)

    Science.gov (United States)

    Moore, R. V.

    2009-12-01

    The need for sustainable solutions to the world’s problems is self evident; the challenge is to anticipate where, in the environment, economy or society, the proposed solution will have negative consequences. If we failed to realise that the switch to biofuels would have the seemingly obvious result of reduced food production, how much harder will it be to predict the likely impact of policies whose impacts may be more subtle? It has been clear for a long time that models and data will be important tools for assessing the impact of events and the measures for their mitigation. They are an effective way of encapsulating knowledge of a process and using it for prediction. However, most models represent a single or small group of processes. The sustainability challenges that face us now require not just the prediction of a single process but the prediction of how many interacting processes will respond in given circumstances. These processes will not be confined to a single discipline but will often straddle many. For example, the question, “What will be the impact on river water quality of the medical plans for managing a ‘flu pandemic and could they cause a further health hazard?” spans medical planning, the absorption of drugs by the body, the spread of disease, the hydraulic and chemical processes in sewers and sewage treatment works and river water quality. This question nicely reflects the present state of the art. We have models of the processes and standards, such as the Open Modelling Interface (the OpenMI), allow them to be linked together and to datasets. We can therefore answer the question but with the important proviso that we thought to ask it. The next and greater challenge is to deal with the open question, “What are the implications of the medical plans for managing a ‘flu pandemic?”. This implies a system that can make connections that may well not have occurred to us and then evaluate their probable impact. The final touch will be to
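
    A schematic of the linked-model idea referred to above (deliberately not the OpenMI API; the class and method names are invented for illustration): one component model pulls its boundary condition from another at each time step, which is the basic pattern that standards such as the OpenMI formalise.

      # Toy pull-based coupling between two hypothetical component models.
      class DrugLoadModel:
          """Upstream model: pharmaceutical load entering the sewer per time step."""
          def get_value(self, t):
              return 5.0 if 10 <= t < 20 else 1.0    # pandemic-response pulse of drug use

      class RiverQualityModel:
          """Downstream model that pulls its input from the upstream model."""
          def __init__(self, upstream):
              self.upstream = upstream
              self.concentration = 0.0
          def update(self, t):
              load = self.upstream.get_value(t)       # data exchange at the model interface
              self.concentration = 0.8 * self.concentration + 0.2 * load
              return self.concentration

      river = RiverQualityModel(DrugLoadModel())
      for t in range(30):
          c = river.update(t)
      print(f"river concentration after 30 steps: {c:.2f}")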

  7. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
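
    The abstract gives no formulas, but the sampling assumption at stake is often formalised as the contrast between strong sampling (examples drawn from the true concept, so smaller hypotheses receive likelihood proportional to |h|^(-n)) and weak sampling (examples observed independently of the concept). A minimal Bayesian sketch over integer-interval hypotheses, assumed here purely for illustration:

      # Minimal sketch of strong vs. weak sampling in Bayesian generalization over
      # integer-interval hypotheses (illustrative; not the specific model of the paper).
      def generalization(data, probe, strong=True, max_n=100):
          hyps = [(a, b) for a in range(1, max_n + 1) for b in range(a, max_n + 1)]
          post_total, post_probe = 0.0, 0.0
          for a, b in hyps:
              if not all(a <= x <= b for x in data):
                  continue                                    # inconsistent with the data
              size = b - a + 1
              like = size ** (-len(data)) if strong else 1.0  # size principle only under strong sampling
              post_total += like
              if a <= probe <= b:
                  post_probe += like
          return post_probe / post_total

      data = [40, 42, 45]
      print("strong sampling:", round(generalization(data, 50, strong=True), 3))
      print("weak sampling  :", round(generalization(data, 50, strong=False), 3))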

  8. Gamified Requirements Engineering: Model and Experimentation

    NARCIS (Netherlands)

    Lombriser, Philipp; Dalpiaz, Fabiano; Lucassen, Garm; Brinkkemper, Sjaak

    2016-01-01

    [Context & Motivation] Engaging stakeholders in requirements engineering (RE) influences the quality of the requirements and ultimately of the system to-be. Unfortunately, stakeholder engagement is often insufficient, leading to too few, low-quality requirements. [Question/problem] We aim to

  9. Gamified Requirements Engineering: Model and Experimentation

    NARCIS (Netherlands)

    Lombriser, Philipp; Dalpiaz, Fabiano; Lucassen, Garm; Brinkkemper, Sjaak

    2016-01-01

    [Context & Motivation] Engaging stakeholders in requirements engineering (RE) influences the quality of the requirements and ultimately of the system to-be. Unfortunately, stakeholder engagement is often insufficient, leading to too few, low-quality requirements. [Question/problem] We aim to evaluat

  10. Gamified Requirements Engineering: Model and Experimentation

    NARCIS (Netherlands)

    Lombriser, Philipp; Dalpiaz, Fabiano|info:eu-repo/dai/nl/369508394; Lucassen, Garm; Brinkkemper, Sjaak|info:eu-repo/dai/nl/07500707X

    2016-01-01

    [Context & Motivation] Engaging stakeholders in requirements engineering (RE) influences the quality of the requirements and ultimately of the system to-be. Unfortunately, stakeholder engagement is often insufficient, leading to too few, low-quality requirements. [Question/problem] We aim to evaluat

  11. Exposing Trust Assumptions in Distributed Policy Enforcement (Briefing Charts)

    Science.gov (United States)

    2016-06-21

    Coordinated defenses appear to be feasible • Writing policies from scratch is hard – Exposing assumptions requires people to think about what assumptions... critical capabilities as: – Adaptation to dynamic service availability – Complex situational dynamics (e.g., differentiating between bot-net and

  12. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Science.gov (United States)

    2010-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of... this part must be based on methods and assumptions that are reasonable in the aggregate, based on...

  13. Modeling uncertainty in requirements engineering decision support

    Science.gov (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.

    2005-01-01

    One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements, and to deal with the combination of such information from multiple stakeholders.

  15. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, Wilco; Jonkers, Henk; Sinderen, van Marten

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for enterpris

  16. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for

  17. Modern Cosmology: Assumptions and Limits

    Science.gov (United States)

    Hwang, Jai-Chan

    2012-06-01

    Physical cosmology tries to understand the Universe at large with its origin and evolution. Observational and experimental situations in cosmology do not allow us to proceed purely based on the empirical means. We examine in which sense our cosmological assumptions in fact have shaped our current cosmological worldview with consequent inevitable limits. Cosmology, as other branches of science and knowledge, is a construct of human imagination reflecting the popular belief system of the era. The question at issue deserves further philosophic discussions. In Whitehead's words, ``philosophy, in one of its functions, is the critic of cosmologies.'' (Whitehead 1925).

  18. Modern Cosmology: Assumptions and Limits

    CERN Document Server

    Hwang, Jai-chan

    2012-01-01

    Physical cosmology tries to understand the Universe at large with its origin and evolution. Observational and experimental situations in cosmology do not allow us to proceed purely based on the empirical means. We examine in which sense our cosmological assumptions in fact have shaped our current cosmological worldview with consequent inevitable limits. Cosmology, as other branches of science and knowledge, is a construct of human imagination reflecting the popular belief system of the era. The question at issue deserves further philosophic discussions. In Whitehead's words, "philosophy, in one of its functions, is the critic of cosmologies". (Whitehead 1925)

  19. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention

    OpenAIRE

    Nielsen, Simon; Wilms, L. Inge

    2015-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60–75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path a...

  20. DECISION MAKING MODELING OF CONCRETE REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Suhartono Irawan

    2001-01-01

    Full Text Available This paper presents the results of an experimental evaluation between predicted and actual (in-practice) concrete strength. The scope of the evaluation is the optimisation of the cement content for different concrete grades as a result of bringing the target mean value of test cubes closer to the required characteristic strength value by reducing the standard deviation. Keywords: concrete mix design, acceptance control, optimisation, cement content.
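
    The link between target mean strength, characteristic strength and standard deviation that underlies this optimisation is the usual mix-design margin, stated here in generic form (the paper's exact margin factor may differ):

      f_m = f_{ck} + k\,s, \qquad k \approx 1.64 \ \text{for a 5 per cent defective rate,}

    so that reducing the standard deviation s of the test cubes lowers the required target mean f_m, and with it the cement content, for the same characteristic strength f_ck.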

  1. Challenged assumptions and invisible effects

    DEFF Research Database (Denmark)

    Wimmelmann, Camilla Lawaetz; Vitus, Kathrine; Jervelund, Signe Smith

    2017-01-01

    for the implementation—different from the assumed conditions—not only challenge the implementation of the intervention but also potentially produce unanticipated yet valuable effects. Research implications – Newly arrived immigrants represent a hugely diverse and heterogeneous group of people with differing values...... of two complete intervention courses and an analysis of the official intervention documents. Findings – This case study exemplifies how the basic normative assumptions behind an immigrant-oriented intervention and the intrinsic power relations therein may be challenged and negotiated by the participants....... In particular, the assumed (power) relations inherent in immigrant-oriented educational health interventions, in which immigrants are in a novice position, are challenged, as the immigrants are experienced adults (and parents) in regard to healthcare. The paper proposes that such unexpected conditions...

  2. A MODEL FOR ALIGNING SOFTWARE PROJECTS REQUIREMENTS WITH PROJECT TEAM MEMBERS REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Robert Hans

    2013-02-01

    Full Text Available The fast-paced, dynamic environment within which information and communication technology (ICT projects are run as well as ICT professionals’ constant changing requirements present a challenge for project managers in terms of aligning projects’ requirements with project team members’ requirements. This research paper purports that if projects’ requirements are properly aligned with team members’ requirements, then this will result in a balanced decision approach. Moreover, such an alignment will result in the realization of employee’s needs as well as meeting project’s needs. This paper presents a Project’s requirements and project Team members’ requirements (PrTr alignment model and argues that a balanced decision which meets both software project’s requirements and team members’ requirements can be achieved through the application of the PrTr alignment model.

  3. Critical appraisal of assumptions in chains of model calculations used to project local climate impacts for adaptation decision support—the case of Baakse Beek

    NARCIS (Netherlands)

    van der Sluijs, Jeroen; Wardekker, Arjan

    2015-01-01

    In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climat

  4. Supporting requirements model evolution throughout the system life-cycle

    OpenAIRE

    Ernst, Neil; Mylopoulos, John; Yu, Yijun; Ngyuen, Tien T.

    2008-01-01

    Requirements models are essential not just during system implementation, but also to manage system changes post-implementation. Such models should be supported by a requirements model management framework that allows users to create, manage and evolve models of domains, requirements, code and other design-time artifacts along with traceability links between their elements. We propose a comprehensive framework which delineates the operations and elements necessary, and then describe a tool imp...

  5. Climate change scenarios in Mexico from models results under the assumption of a doubling in the atmospheric CO{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, V.M.; Villanueva, E.E.; Garduno, R.; Adem, J. [Centro de Ciencias de la Atmosfera, Mexico (Mexico)

    1995-12-31

    General circulation models (GCMs) and energy balance models (EBMs) are the best way to simulate the complex large-scale dynamic and thermodynamic processes in the atmosphere. These models have been used to estimate the global warming due to an increase of atmospheric CO{sub 2}. In Japan, Ohta and coworkers have developed a physical model based on the conservation of thermal energy applied to ponded shallow water, to compute the change in the water temperature, using the atmospheric warming and the precipitation due to the increase in the atmospheric CO{sub 2} computed by the GISS-GCM. In this work, a method similar to Ohta's is used for computing the change in ground temperature, soil moisture, evaporation, runoff and dryness index in eleven hydrological zones, using in this case the surface air temperature and precipitation due to CO{sub 2} doubling, computed by the GFDLR30-GCM and the version of the Adem thermodynamic climate model (CTM-EBM), which contains the three feedbacks (cryosphere, clouds and water vapor), and does not include water vapor in the CO{sub 2} atmospheric spectral band (12-19 {mu}m).

  6. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    Science.gov (United States)

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method to detect cognitive aging effects even within a narrow age-range, and a useful approach to structure the relationships between measured variables, and the cognitive functional foundation they supposedly represent.

  7. Cognitive ageing on latent constructs for visual processing capacity: A novel Structural Equation Modelling framework with causal assumptions based on A Theory of Visual Attention

    Directory of Open Access Journals (Sweden)

    Simon eNielsen

    2015-01-01

    Full Text Available We examined the effects of normal ageing on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive ageing affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modelling (SEM; Model 2), informed by functional structures that were modelled with path analyses in SEM (Model 1). The results show that ageing effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective ageing effects on processing speed, and inconsistent with other studies reporting ageing effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method to detect cognitive ageing effects even within a narrow age-range, and a useful approach to structure the relationships between measured variables, and the cognitive functional foundation they supposedly represent.

  8. Requirements model for an e-Health awareness portal

    Science.gov (United States)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Nawi, Mohd Nasrun M.

    2016-08-01

    Requirements engineering is at the heart and foundation of software engineering process. Poor quality requirements inevitably lead to poor quality software solutions. Also, poor requirement modeling is tantamount to designing a poor quality product. So, quality assured requirements development collaborates fine with usable products in giving the software product the needed quality it demands. In the light of the foregoing, the requirements for an e-Ebola Awareness Portal were modeled with a good attention given to these software engineering concerns. The requirements for the e-Health Awareness Portal are modeled as a contribution to the fight against Ebola and helps in the fulfillment of the United Nation's Millennium Development Goal No. 6. In this study requirements were modeled using UML 2.0 modeling technique.

  9. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; Sinderen, van Marten

    2011-01-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling te

  10. Informal Institutions,Consumption Custom,and Assumption of OLG Model-A Theoretical Analysis on Households' Consumption Decisions in the Oriental Culture and Belief

    Institute of Scientific and Technical Information of China (English)

    HUANG Shao'an; SUN Tao

    2006-01-01

    Orthodox consumption theories have not incorporated the overlapping-generations (OLG) model and wealth-stock model, whereas this article explains households' characters in consumption and savings in countries such as China and some other regions from the viewpoints of social convention, moral formation, ethics, and other informal institutions. The authors exploit and extend the OLG model, introduce the concepts of bequest, gift, and wealth preference to the economic agent's utility function, then apply optimal conditions to analyzing the characters and problems concerning consumption and savings behavior. Furthermore, they deliberate on the effects of this analysis on government macroeconomic policies and suggest some relevant theoretical thinking and solutions.
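
    In generic two-period notation (assumed here for illustration, not quoted from the article), the extended utility function described above takes a form such as

      U_t = u(c_t^{y}) + \beta\,u(c_{t+1}^{o}) + \phi\,v(b_{t+1}) + \psi\,w(a_{t+1}),

    where c^y and c^o are consumption when young and old, b_{t+1} the bequest or gift left to the next generation, a_{t+1} terminal wealth capturing the wealth preference, and beta, phi, psi weights shaped by the informal institutions discussed in the article.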

  11. The theory of reasoned action as a model of marijuana use: tests of implicit assumptions and applicability to high-risk young women.

    Science.gov (United States)

    Morrison, Diane M; Golder, Seana; Keller, Thomas E; Gillmore, Mary Rogers

    2002-09-01

    The theory of reasoned action (TRA) is used to model decisions about substance use among young mothers who became premaritally pregnant at age 17 or younger. The results of structural equation modeling to test the TRA indicated that most relationships specified by the model were significant and in the predicted direction. Attitude was a stronger predictor of intention than norm, but both were significantly related to intention, and intention was related to actual marijuana use 6 months later. Outcome beliefs were bidimensional, and positive outcome beliefs, but not negative beliefs, were significantly related to attitude. Prior marijuana use was only partially mediated by the TRA variables; it also was directly related to intentions to use marijuana and to subsequent use.
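
    In the structural-equation form implied above, the TRA core can be sketched as follows (coefficients, error terms and the exact role of prior use are generic notation assumed for illustration):

      I = \gamma_A A + \gamma_N N + \gamma_P P + \varepsilon_I, \qquad
      B = \gamma_I I + \gamma'_P P + \varepsilon_B,

    with attitude A, subjective norm N, intention I, later marijuana use B and prior use P; the finding that attitude dominates corresponds to gamma_A > gamma_N.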

  12. Testing an assumption of the E-Z Reader model of eye-movement control during reading: Using event-related potentials to examine the familiarity check

    NARCIS (Netherlands)

    Reichle, E.D.; Tokowicz, N.; Liu, Y.; Perfetti, C.A.

    2011-01-01

    According to the E-Z Reader model of eye-movement control, the completion of an early stage of lexical processing, the familiarity check, causes the eyes to move forward during reading (Reichle, Pollatsek, Fisher, & Rayner, 1998). Here, we report an event-related potential (ERP

  13. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  14. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

    Tracing between informal requirements and formal models is challenging. A method for such tracing should permit to deal efficiently with changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we ... a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally and support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements...

  15. Testing an assumption of the E-Z Reader model of eye-movement control during reading: using event-related potentials to examine the familiarity check.

    Science.gov (United States)

    Reichle, Erik D; Tokowicz, Natasha; Liu, Ying; Perfetti, Charles A

    2011-07-01

    According to the E-Z Reader model of eye-movement control, the completion of an early stage of lexical processing, the familiarity check, causes the eyes to move forward during reading (Reichle, Pollatsek, Fisher, & Rayner, 1998). Here, we report an event-related potential (ERP) experiment designed to examine the hypothesized familiarity check at the electrophysiological level. The results indicate ERP components modulated by word frequency at the time of the predicted familiarity check. These findings are consistent with the hypothesis that an early stage of lexical processing is linked to the "decisions" about when to move the eyes during reading. Copyright © 2011 Society for Psychophysiological Research.

  16. Using cognitive modeling for requirements engineering in anesthesiology

    NARCIS (Netherlands)

    Pott, C; le Feber, J

    2005-01-01

    Cognitive modeling is a complexity reducing method to describe significant cognitive processes under a specified research focus. Here, a cognitive process model for decision making in anesthesiology is presented and applied in requirements engineering. Three decision making situations of

  17. Determination of the optimal periodic maintenance policy under imperfect repair assumption

    OpenAIRE

    Maria Luiza Guerra de Toledo

    2014-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to repairable system failures. The usual assumptions of minimal or perfect repair at failures are not suitable for many real systems, requiring the application of Imperfect Repair models. In this work, the classes Arithmetic Reduction of Age and Arithmetic Reduction of Intensity, proposed by Doyen and Gaudoin (2004), are explored. Likelihood functions for such models are derived, and the parameters are es...
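
    For reference, the memory-one members of these two classes are usually written roughly as below (a hedged sketch; see Doyen and Gaudoin (2004) for the exact definitions), with lambda_0 the intensity of a new system, T_{N(t-)} the last repair time and rho in [0,1] the repair-efficiency parameter (rho = 0 giving minimal repair):

      \lambda_{\mathrm{ARA}_1}(t) = \lambda_0\!\left(t - \rho\,T_{N(t^-)}\right), \qquad
      \lambda_{\mathrm{ARI}_1}(t) = \lambda_0(t) - \rho\,\lambda_0\!\left(T_{N(t^-)}\right).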

  18. A prospective overview of the essential requirements in molecular modeling for nanomedicine design.

    Science.gov (United States)

    Kumar, Pradeep; Khan, Riaz A; Choonara, Yahya E; Pillay, Viness

    2013-05-01

    Nanotechnology has presented many new challenges and opportunities in the area of nanomedicine design. The issues related to nanoconjugation, nanosystem-mediated targeted drug delivery, transitional stability of nanovehicles, the integrity of drug transport, drug-delivery mechanisms and chemical structural design require a pre-estimated and determined course of assumptive actions with property and characteristic estimations for optimal nanomedicine design. Molecular modeling in nanomedicine encompasses these pre-estimations and predictions of pertinent design data via interactive computographic software. Recently, an increasing amount of research has been reported where specialized software is being developed and employed in an attempt to bridge the gap between drug discovery, materials science and biology. This review provides an assimilative and concise incursion into the current and future strategies of molecular-modeling applications in nanomedicine design and aims to describe the utilization of molecular models and theoretical-chemistry computographic techniques for expansive nanomedicine design and development.

  19. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Hallberg

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other appl...

  20. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model

    Energy Technology Data Exchange (ETDEWEB)

    D. E. Shropshire; W. H. West

    2005-11-01

    The purpose of this Software Requirements Specification (SRS) is to define the top-level requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). This simulation model is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies.

  1. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow...

  2. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Science.gov (United States)

    Caricchi, Luca; Simpson, Guy; Schaltegger, Urs

    2016-04-01

    Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.

  3. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Directory of Open Access Journals (Sweden)

    Luca eCaricchi

    2016-04-01

    Full Text Available Magma fluxes in the Earth’s crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
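
    A hedged sketch of the sampling idea used in the two records above (all numbers hypothetical): synthetic zircon ages are drawn with probability proportional to the volume of magma inside the zircon crystallisation window at each time, and the mode, median and standard deviation of the resulting population are then summarised.

      # Toy volume-at-temperature history and synthetic zircon age population.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 1.0, 1001)                   # Myr after first injection (assumed)
      v_in_window = np.maximum(0.0, np.sin(np.pi * t))  # toy volume within the zircon window

      weights = v_in_window / v_in_window.sum()
      ages = rng.choice(t, size=300, p=weights)         # 300 synthetic zircon ages

      counts, edges = np.histogram(ages, bins=50, range=(0.0, 1.0))
      mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
      print(f"mode ~ {mode:.2f} Myr, median = {np.median(ages):.2f} Myr, std = {ages.std():.2f} Myr")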

  4. A Critique of Uncriticized Assumptions.

    Science.gov (United States)

    Healy, Timony S.

    1980-01-01

    Liberal arts colleges are seen as engaged in moral education. Three moral lessons that a college teaches are described as (1) love for the truth, (2) learning is a human good, and (3) learning requires intellectual rigor or "discipline." Colleges are seen as places of hope. (MLW)

  5. Sensitivity of Forward Radiative Transfer Model on Spectroscopic Assumptions and Input Geophysical Parameters at 23.8 GHz and 183 GHz Channels and its Impact on Inter-calibration of Microwave Radiometers

    Science.gov (United States)

    Datta, S.; Jones, W. L.; Ebrahimi, H.; Chen, R.; Payne, V.; Kroodsma, R.

    2014-12-01

    The first step in radiometric inter-calibration is to ascertain the self-consistency and reasonableness of the observed brightness temperature (Tb) for each individual sensor involved. One of the widely used approaches is to compare the observed Tb with a simulated Tb using a forward radiative transfer model (RTM) and input geophysical parameters at the geographic location and time of the observation. In this study we intend to test the sensitivity of the RTM to uncertainties in the input geophysical parameters as well as to the underlying physical assumptions of gaseous absorption and surface emission in the RTM. SAPHIR, a cross track scanner onboard Indo-French Megha-Tropique Satellite, gives us a unique opportunity of studying 6 dual band 183 GHz channels at an inclined orbit over the Tropics for the first time. We will also perform the same sensitivity analysis using the Advance Technology Microwave Sounder (ATMS) 23 GHz and five 183 GHz channels. Preliminary analysis comparing GDAS and an independent retrieved profile show some sensitivity of the RTM to the input data. An extended analysis of this work using different input geophysical parameters will be presented. Two different absorption models, the Rosenkranz and the MonoRTM will be tested to analyze the sensitivity of the RTM to spectroscopic assumptions in each model. Also for the 23.8 GHz channel, the sensitivity of the RTM to the surface emissivity model will be checked. Finally the impact of these sensitivities on radiometric inter-calibration of radiometers at sounding frequencies will be assessed.

  6. Process Model for Defining Space Sensing and Situational Awareness Requirements

    Science.gov (United States)

    2006-04-01

    A process model for defining systems for space sensing and space situational awareness is presented. The paper concentrates on eight steps for determining the requirements, including decision-maker needs, system requirements, exploitation methods and vulnerabilities, critical capabilities, and the identification of attack scenarios. Utilization of the USAF anti-tamper (AT) implementation process as a process-model departure point for the space sensing and situational awareness (SSSA...is presented. The AT implementation process model, as an

  7. Exploring gravitational statistics not based on quantum dynamical assumptions

    CERN Document Server

    Mandrin, P A

    2016-01-01

    Despite considerable progress in several approaches to quantum gravity, there remain uncertainties on the conceptual level. One issue concerns the different roles played by space and time in the canonical quantum formalism. This issue occurs because the Hamilton-Jacobi dynamics is being quantised. The question then arises whether additional physically relevant states could exist which cannot be represented in the canonical form or as a partition function. For this reason, the author has explored a statistical approach (NDA) which is not based on quantum dynamical assumptions and does not require space-time splitting boundary conditions either. For dimension 3+1 and under thermal equilibrium, NDA simplifies to a path integral model. However, the general case of NDA cannot be written as a partition function. As a test of NDA, one recovers general relativity at low curvature and quantum field theory in the flat space-time approximation. Related paper: arxiv:1505.03719.

  8. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    -specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform such a model automatically into programs that essentially will replace customisation and localisation by configuration by changing parameters in the model. In particular, we: (1) identify a number of requirements for such modeling, including requirements for the underlying logic; (2) model salient parts...

  9. Digital Avionics Information System (DAIS): Training Requirements Analysis Model (TRAMOD).

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    The training requirements analysis model (TRAMOD) described in this report represents an important portion of the larger effort called the Digital Avionics Information System (DAIS) Life Cycle Cost (LCC) Study. TRAMOD is the second of three models that comprise an LCC impact modeling system for use in the early stages of system development. As…

  10. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach.

  11. Beyond the crystal ball assumption

    DEFF Research Database (Denmark)

    Vaucouleur, Sebastien

    2008-01-01

    trades control for flexibility. Unfortunately, it also makes the customized software product very sensitive to upgrades. We propose a more mitigated solution, that does not require accurate anticipation and yet offers some resilience to evolution of the base software product through the use of code...... quantification. We introduce the Eggther framework for customization of evolvable software products in general and ERP systems in particular. Our approach is based on the concept of code query by example. The technology being developed is based on an initial empirical study on practices around ERP systems. We...... motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the upgrade problem....

  12. GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Ian; Kossik, Rick; Voss, Charlie

    2003-02-27

    Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.

  13. Requirements engineering for cross-sectional information chain models.

    Science.gov (United States)

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed.

  14. Inferring Requirement Goals from Model Implementing in UML

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    UML is used widely in many software development processes. However, it does not make requirement goals explicit. Here is a method intended to establish the semantic relationship between requirements goals and UML models. Before the method is introduced, some relevant concepts are described.

  15. Technological assumptions for biogas purification.

    Science.gov (United States)

    Makareviciene, Violeta; Sendzikiene, Egle

    2015-01-01

    Biogas can be used in the engines of transport vehicles and blended into natural gas networks, but it also requires the removal of carbon dioxide, hydrogen sulphide, and moisture. Biogas purification process flow diagrams have been developed for a process enabling the use of a dolomite suspension, as well as for solutions obtained by the filtration of the suspension, to obtain biogas free of hydrogen sulphide and with a carbon dioxide content that does not exceed 2%. The cost of biogas purification was evaluated on the basis of data on biogas production capacity and biogas production cost obtained from local water treatment facilities. It has been found that, with the use of dolomite suspension, the cost of biogas purification is approximately six times lower than that in the case of using a chemical sorbent such as monoethanolamine. The results showed that travelling costs using biogas purified with the dolomite suspension are nearly 1.5 times lower than travelling costs using gasoline and slightly lower than travelling costs using mineral diesel fuel.

  16. On the Necessary and Sufficient Assumptions for UC Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio

    2010-01-01

    We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results for all of them. Perhaps most interestingly we show that: •  For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passive secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for the KRA model: any non-trivial KRA is sufficient for UC computation. •  We show

  17. Peacebuilding: assumptions, practices and critiques

    Directory of Open Access Journals (Sweden)

    Cravo, Teresa Almeida

    2017-05-01

    Full Text Available Peacebuilding has become a guiding principle of international intervention in the periphery since its inclusion in the Agenda for Peace of the United Nations in 1992. The aim of creating the conditions for a self-sustaining peace in order to prevent a return to armed conflict is, however, far from easy or consensual. The conception of liberal peace proved particularly limited, and inevitably controversial, and the reality of war-torn societies far more complex than anticipated by international actors that today assume activities in the promotion of peace in post-conflict contexts. With a trajectory full of contested successes and some glaring failures, the current model has been the target of harsh criticism and widespread scepticism. This article critically examines the theoretical background and practicalities of peacebuilding, exploring its ambition as well as the weaknesses of the paradigm adopted by the international community since the 1990s.

  18. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    The actual power requirement of an active loudspeaker during playback of music has not received much attention in the literature. This is probably because no single and simple solution exists and because a complete system knowledge from input voltage to output sound pressure level is required....... There are however many advantages that could be harvested from such knowledge like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated and the model is expanded to include the closed and vented type enclosures...

  19. Towards a Formalized Ontology-Based Requirements Model

    Institute of Scientific and Technical Information of China (English)

    JIANG Dan-dong; ZHANG Shen-sheng; WANG Ying-lin

    2005-01-01

    The goal of this paper is to take a further step towards an ontological approach for representing requirements information. The motivation for ontologies was discussed. The definitions of ontology and requirements ontology were given. Then, it presented a collection of informal terms, including four subject areas. It also discussed the formalization process of ontology. The underlying meta-ontology was determined, and the formalized requirements ontology was analyzed. This formal ontology is built to serve as a basis for requirements model. Finally, the implementation of software system was given.

  20. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    Swarup Mohalik; R Ramanujam

    2002-04-01

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the commitments offered by the other at that state. We model examples like reliable bit transmission and sequence transmission protocols in this framework and discuss how assumption-commitment structure facilitates compositional design of such protocols. We prove a decomposition theorem which states that every protocol specified globally as a finite state system can be decomposed into such an assumption compatible system. We also present a syntactic characterization of this class using top level parallel composition.

  1. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) to system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).

  2. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional 1.3 mm of water per application, with one application every 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of the per-application amount simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.
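
    The per-application depths and frequencies quoted above can be turned into rough monthly totals. The sketch below is purely illustrative arithmetic based on the numbers reported in the abstract; the assumption that the stated frequencies hold uniformly over the whole of July is mine, not the authors'.

```python
# Illustrative arithmetic only: aggregate the per-application depths reported above
# into July totals, assuming the stated application frequencies hold all month.
HOURS_IN_JULY = 31 * 24

def monthly_irrigation(depth_mm_per_event, hours_between_events):
    """Approximate monthly irrigation depth (mm) from per-event depth and frequency."""
    events = HOURS_IN_JULY / hours_between_events
    return events * depth_mm_per_event

spray = monthly_irrigation(1.3, 24.6)   # roughly 39 mm over July
drip = monthly_irrigation(0.6, 45.6)    # roughly 10 mm over July

print(f"spray: {spray:.1f} mm, drip: {drip:.1f} mm")
print(f"per-event ratio drip/spray: {0.6 / 1.3:.0%}")  # ~46%, matching the abstract
```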

  3. Innovative Product Design Based on Customer Requirement Weight Calculation Model

    Institute of Scientific and Technical Information of China (English)

    Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang

    2010-01-01

    In the processes of product innovation and design, it is important for designers to find and capture the customer's focus through customer requirement weight calculation and ranking. Based on fuzzy set theory and Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called the Euclidean space distance weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed; meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, combined with the instance of titanium sponge production, the customer requirement weight calculation model is validated. Using the innovation design module, the structure of the titanium sponge reactor has been improved and made innovative.
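
    The abstract does not give the exact formula behind the Euclidean space distance weighting ranking method, so the sketch below only illustrates the general idea of turning distances to an ideal rating point into normalized requirement weights. The ratings, the closeness transform and all names are invented for illustration.

```python
import numpy as np

# Hypothetical ratings: rows = customer requirements, columns = customers, values in [0, 1].
ratings = np.array([
    [0.90, 0.80, 0.85],   # e.g. "stable reaction temperature"
    [0.60, 0.70, 0.65],   # e.g. "easy cleaning of the reactor"
    [0.40, 0.50, 0.45],   # e.g. "low noise"
])

ideal = np.ones(ratings.shape[1])               # ideal point: every customer rates 1.0
dist = np.linalg.norm(ratings - ideal, axis=1)  # Euclidean distance to the ideal

closeness = 1.0 / (1.0 + dist)                  # smaller distance -> larger score
weights = closeness / closeness.sum()           # normalise so the weights sum to 1

for i, w in enumerate(weights):
    print(f"requirement {i + 1}: weight = {w:.3f}")
```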

  4. More Efficient VLR Group Signature Based on DTDH Assumption

    Directory of Open Access Journals (Sweden)

    Lizhen Ma

    2012-10-01

    Full Text Available In VLR (verifier-local revocation) group signature, only verifiers are involved in the revocation of a member, while signers are not. Thus VLR group signature schemes are suitable for mobile environments. To meet the requirement of speediness, reducing computation costs and shortening signature length are two key goals in current research on VLR group signatures. A new VLR group signature is proposed based on the q-SDH assumption and the DTDH assumption. Compared with the existing VLR group signatures based on the DTDH assumption, the proposed scheme not only has the shortest signature size, but also has the lowest computation costs, and is applicable to mobile environments such as IEEE 802.1x.

  5. Uncertainty in deterministic groundwater transport models due to the assumption of macrodispersive mixing: evidence from the Cape Cod (Massachusetts, U.S.A.) and Borden (Ontario, Canada) tracer tests

    Science.gov (United States)

    Fitts, Charles R.

    1996-06-01

    concentration at any given time results in a standard deviation of ∼0.12 in the statistic log(ca(max)/cm(max)) for both tests. Although the uncertainties listed above pertain to the scales of un-modeled velocity variation in these models at these sites, the reported uncertainties could serve as lower bound estimates for most deterministic model applications. Uncertainty due to the assumption of macrodispersive mixing tends to increase as the plume scale decreases or as the scale of un-modeled velocity field variations increases.

  6. Evaluation of Foreign Exchange Risk Capital Requirement Models

    Directory of Open Access Journals (Sweden)

    Ricardo S. Maia Clemente

    2005-12-01

    Full Text Available This paper examines capital requirements for financial institutions in order to cover market risk stemming from exposure to foreign currencies. The models examined belong to two groups according to the approach involved: standardized and internal models. In the first group, we study the Basel model and the model adopted by the Brazilian legislation. In the second group, we consider the models based on the concept of value at risk (VaR). We analyze the single- and the double-window historical model, the exponential smoothing model (EWMA) and a hybrid approach that combines features of both models. The results suggest that the Basel model is inadequate for the Brazilian market, exhibiting a large number of exceptions. The model of the Brazilian legislation has no exceptions, though it generates higher capital requirements than the internal models based on VaR. In general, VaR-based models perform better and result in less capital allocation than the standardized approach model applied in Brazil.
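
    To make the internal-model approaches concrete, here is a minimal sketch of a historical-simulation VaR and a RiskMetrics-style EWMA VaR for a foreign-exchange position. The return series, exposure, confidence level and decay factor are invented; the paper's single- and double-window and hybrid variants are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def historical_var(returns, alpha=0.01, window=252):
    """One-day VaR (positive fraction of exposure) from the empirical return quantile."""
    return -np.quantile(returns[-window:], alpha)

def ewma_var(returns, alpha=0.01, lam=0.94):
    """RiskMetrics-style VaR: EWMA volatility combined with a normal quantile."""
    variance = returns[0] ** 2
    for r in returns[1:]:
        variance = lam * variance + (1 - lam) * r ** 2
    return -norm.ppf(alpha) * np.sqrt(variance)

rng = np.random.default_rng(0)
fx_returns = rng.normal(0.0, 0.01, size=500)   # stand-in for daily FX returns

exposure = 10_000_000                          # hypothetical foreign-currency exposure
print("historical VaR:", exposure * historical_var(fx_returns))
print("EWMA VaR:      ", exposure * ewma_var(fx_returns))
```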

  7. The Self in Guidance: Assumptions and Challenges.

    Science.gov (United States)

    Edwards, Richard; Payne, John

    1997-01-01

    Examines the assumptions of "self" made in the professional and managerial discourses of guidance. Suggests that these assumptions obstruct the capacity of guidance workers to explain their own practices. Drawing on contemporary debates over identity, modernity, and postmodernity, argues for a more explicit debate about the self in guidance. (RJM)

  8. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to leve

  9. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The authors’ aim...... a holistic approach to eliciting, analyzing, and modelling socially-oriented requirements by combining a particular form of ethnographic technique, cultural probes, with Agent Oriented Software Engineering notations to model these requirements. This paper focuses on examining the value of maintaining...... of their research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from their socially oriented life for the purposes of building rich socio...

  10. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. Thus BPMN is widely used and applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models, enabling their use for business process simulation.

  11. The Spin-Charge-Family theory offers the explanation for all the assumptions of the Standard model, for the Dark matter, for the Matter-antimatter asymmetry, making several predictions

    CERN Document Server

    Borštnik, Norma Susana Mankoč

    2016-01-01

    The spin-charge-family theory, which is a kind of the Kaluza-Klein theories but with fermions carrying two kinds of spins (no charges), offers the explanation for all the assumptions of the standard model, with the origin of families, the higgs and the Yukawa couplings included. It offers the explanation also for other phenomena, like the origin of the dark matter and of the matter/antimatter asymmetry in the universe. It predicts the existence of the fourth family to the observed three, as well as several scalar fields with the weak and the hyper charge of the standard model higgs ($\pm \frac{1}{2}, \mp \frac{1}{2}$, respectively), which determine the mass matrices of family members, offering an explanation why the fourth family with the masses above $1$ TeV contributes weakly to the gluon-fusion production of the observed higgs and to its decay into two photons, and predicting that the two-photon events, observed at the LHC at $\approx 750$ GeV, might be an indication for the existence of one of several s...

  12. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) to system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).
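
    The actual transformation is written in ATL over UML meta-models; the toy Python sketch below only conveys the idea of projecting a global activity flow onto per-role state machines. The example flow, role names and the naive synchronisation labels are all invented.

```python
# Toy projection of a global activity sequence onto per-role finite state machines.
# All names are invented; the real approach defines ATL rules over UML meta-models.

global_flow = [                       # (role, activity) in global execution order
    ("Client", "submitOrder"),
    ("Server", "validateOrder"),
    ("Server", "storeOrder"),
    ("Client", "receiveConfirmation"),
]

def project(flow, role):
    """Derive a state machine (list of transitions) for one role from the global flow."""
    fsm, state = [], 0
    for actor, activity in flow:
        label = activity if actor == role else f"sync({actor}.{activity})"
        fsm.append((f"s{state}", label, f"s{state + 1}"))
        state += 1
    return fsm

for role in ("Client", "Server"):
    print(role, project(global_flow, role))
```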

  13. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The Requirement Engineering process starts from the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on subsequent design and build phases as well. Accurately capturing system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform the requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during the elicitation process is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and use it as a base for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  14. Formal Requirements Modeling for Reactive Systems with Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon

    This dissertation presents the contributions of seven publications all concerned with the application of Coloured Petri Nets (CPN) to requirements modeling for reactive systems. The publications are introduced along with relevant background material and related work, and their contributions...... interface composed of recognizable artifacts and activities. The presentation of the three publications related to Use Cases is followed by the presentation of a publication formalizing some of the guidelines applied for structuring the CPN requirements models, namely the guidelines that make it possible...... activity. The traces are automatically recorded during execution of the model. The second publication presents a formally specified framework for automating a large part of the tasks related to integrating Problem Frames with CPN. The framework is specified in VDM++, and allows the modeler to automatically...

  15. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    Science.gov (United States)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  16. Single High Fidelity Geometric Data Sets for LCM - Model Requirements

    Science.gov (United States)

    2006-11-01

    material name (for example, an HY80 steel) plus additional material requirements (heat treatment, etc.). Creation of a more detailed description of the data... [List-of-figures residue omitted; Figure 2.22 showed a typical stress-strain curve for steel, adapted from Ref 59.] ...structures are steel, aluminum and composites. The structural components that make up a global FEA model drive the fidelity of the model. For example...

  17. Price/Earnings Ratio Model through Dividend Yield and Required Yield Above Expected Inflation

    Directory of Open Access Journals (Sweden)

    Emil Mihalina

    2010-07-01

    Full Text Available Price/earnings ratio is the most popular and most widespread evaluation model used to assess relative capital asset value on financial markets. In functional terms, company earnings in the very long term can be described with high significance. Empirically, it is visible from long-term statistics that the demanded (required) yield on capital markets has certain regularity. Thus, investors first require a yield above the stable inflation rate and then a dividend yield and a capital increase caused by the growth of earnings that influence the price, with the assumption that the P/E ratio is stable. By combining the Gordon model for current dividend value, the model of market capitalization of earnings (price/earnings ratio) and bearing in mind the influence of the general price levels on company earnings, it is possible to adjust the price/earnings ratio by deriving a function of the required yield on capital markets measured by a market index through dividend yield and inflation rate above the stable inflation rate increased by profit growth. The S&P 500 index, for example, has in the last 100 years grown by exactly the inflation rate above the stable inflation rate increased by profit growth. The comparison of two series of price/earnings ratios, a modelled one and an average 7-year ratio, shows a notable correlation in the movement of two series of variables, with a three year deviation. Therefore, it could be hypothesized that three years of the expected inflation level, dividend yield and profit growth rate of the market index are discounted in the current market prices. The conclusion is that, at the present time, the relationship between the adjusted average price/earnings ratio and its effect on the market index on one hand and the modelled price/earnings ratio on the other can clearly show the expected dynamics and course in the following period.
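
    Read literally, the combination described above amounts to the following algebra (a sketch of one reading of the abstract, assuming a constant payout ratio b; it is not the authors' exact derivation):

```latex
% Gordon growth model for the current price, with D_1 = b E_1 (constant payout b):
\[
  P_0 = \frac{D_1}{r - g}
  \qquad\Longrightarrow\qquad
  \frac{P_0}{E_1} = \frac{b}{r - g},
\]
% with the required yield decomposed, as in the abstract, into the stable inflation
% rate, the dividend yield and the growth of earnings:
\[
  r \;\approx\; \pi^{*} + \frac{D_1}{P_0} + g .
\]
```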

  18. A Comparison of Closed World Assumptions

    Institute of Scientific and Technical Information of China (English)

    沈一栋

    1992-01-01

    In this paper, we introduce a notion of the family of closed world assumptions and compare several well-known closed world approaches in the family with respect to the extent to which an incomplete database is completed.

  19. Requirements for next generation global flood inundation models

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Smith, A.; Sampson, C. C.

    2016-12-01

    In this paper we review the current status of global hydrodynamic models for flood inundation prediction and highlight recent successes and current limitations. Building on this analysis we then go on to consider what is required to develop the next generation of such schemes and show that to achieve this a number of fundamental science problems will need to be overcome. New data sets and new types of analysis will be required, and we show that these will only partially be met by currently planned satellite missions and data collection initiatives. A particular example is the quality of available global Digital Elevation data. The current best data set for flood modelling, SRTM, is only available at a relatively modest 30m resolution, contains pixel-to-pixel noise of 6m and is corrupted by surface artefacts. Creative processing techniques have sought to address these issues with some success, but fundamentally the quality of the available global terrain data limits flood modelling and needs to be overcome. Similar arguments can be made for many other elements of global hydrodynamic models including their bathymetry data, boundary conditions, flood defence information and model validation data. We therefore systematically review each component of global flood models and document whether planned new technology will solve current limitations and, if not, what exactly will be required to do so.

  20. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  2. Models of protein and amino acid requirements for cattle

    Directory of Open Access Journals (Sweden)

    Luis Orlindo Tedeschi

    2015-03-01

    Full Text Available Protein supply and requirements by ruminants have been studied for more than a century. These studies led to the accumulation of lots of scientific information about digestion and metabolism of protein by ruminants as well as the characterization of the dietary protein in order to maximize animal performance. During the 1980s and 1990s, when computers became more accessible and powerful, scientists began to conceptualize and develop mathematical nutrition models, and to program them into computers to assist with ration balancing and formulation for domesticated ruminants, specifically dairy and beef cattle. The most commonly known nutrition models developed during this period were the National Research Council (NRC) in the United States, Agricultural Research Council (ARC) in the United Kingdom, Institut National de la Recherche Agronomique (INRA) in France, and the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Others were derivative works from these models with different degrees of modifications in the supply or requirement calculations, and the modeling nature (e.g., static or dynamic, mechanistic, or deterministic). Circa 1990s, most models adopted the metabolizable protein (MP) system over the crude protein (CP) and digestible CP systems to estimate supply of MP and the factorial system to calculate MP required by the animal. The MP system included two portions of protein (i.e., the rumen-undegraded dietary CP - RUP - and the contributions of microbial CP - MCP) as the main sources of MP for the animal. Some models would explicitly account for the impact of dry matter intake (DMI) on the MP required for maintenance (MPm; e.g., Cornell Net Carbohydrate and Protein System - CNCPS, the Dutch system - DVE/OEB), while others would simply account for scurf, urinary, metabolic fecal, and endogenous contributions independently of DMI. All models included milk yield and its components in estimating MP required for lactation

  3. Modeling requirements for in situ vitrification. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  4. Required experimental accuracy to select between supersymmetrical models

    Indian Academy of Sciences (India)

    David Grellscheid

    2004-03-01

    We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).

  5. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies.

  6. Thermodynamic models for bounding pressurant mass requirements of cryogenic tanks

    Science.gov (United States)

    Vandresar, Neil T.; Haberbusch, Mark S.

    1994-01-01

    Thermodynamic models have been formulated to predict lower and upper bounds for the mass of pressurant gas required to pressurize a cryogenic tank and then expel liquid from the tank. Limiting conditions are based on either thermal equilibrium or zero energy exchange between the pressurant gas and initial tank contents. The models are independent of gravity level and allow specification of autogenous or non-condensible pressurants. Partial liquid fill levels may be specified for initial and final conditions. Model predictions are shown to successfully bound results from limited normal-gravity tests with condensable and non-condensable pressurant gases. Representative maximum collapse factor maps are presented for liquid hydrogen to show the effects of initial and final fill level on the range of pressurant gas requirements. Maximum collapse factors occur for partial expulsions with large final liquid fill fractions.
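
    The bounding idea can be illustrated with a bare-bones ideal-gas estimate: the pressurant mass needed to occupy the displaced ullage volume at tank pressure depends on the temperature the gas ends up at. The helium pressurant, the 300 kPa tank pressure and the 250 K/20 K temperatures below are invented illustrative numbers; the paper's models also cover autogenous (condensable) pressurants and partial fills.

```python
# Ideal-gas sketch of the bounding idea: zero energy exchange keeps the pressurant at
# its inlet temperature (lower-bound mass), while thermal equilibrium with the cold
# tank contents chills it, so more mass is needed (upper bound).  Illustrative only.

R_HELIUM = 2077.0   # J/(kg K), specific gas constant of helium (non-condensable pressurant)

def pressurant_mass(pressure_pa, displaced_volume_m3, gas_temperature_k):
    return pressure_pa * displaced_volume_m3 / (R_HELIUM * gas_temperature_k)

p, V = 300e3, 1.0                        # 300 kPa tank pressure, 1 m^3 of liquid expelled
m_lower = pressurant_mass(p, V, 250.0)   # gas stays at its 250 K inlet temperature
m_upper = pressurant_mass(p, V, 20.0)    # gas equilibrates with ~20 K liquid hydrogen

print(f"lower bound: {m_lower:.2f} kg, upper bound: {m_upper:.2f} kg")
print(f"collapse factor <= {m_upper / m_lower:.1f}")
```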

  7. Model Waveform Accuracy Requirements for the $\chi^2$ Discriminator

    CERN Document Server

    Lindblom, Lee

    2016-01-01

    This paper derives accuracy standards for model gravitational waveforms required to ensure proper use of the $\chi^2$ discriminator test in gravitational wave (GW) data analysis. These standards are different from previously established requirements for detection and waveform parameter measurement based on signal-to-noise optimization. We present convenient formulae both for evaluating and interpreting the contribution of model errors to measured $\chi^2$ values. Motivated by these formulae, we also present an enhanced, complexified variant of the standard $\chi^2$ statistic used in GW searches. While our results are not directly relevant to current searches (which use the $\chi^2$ test only to veto signal candidates with extremely high $\chi^2$ values), they could be useful in future GW searches and as figures of merit for model gravitational waveforms.

  8. A commuting generation model requiring only aggregated data

    CERN Document Server

    Lenormand, Maxime; Gargiulo, Floriana

    2011-01-01

    We recently proposed, in (Gargiulo et al., 2011), an innovative stochastic model with only one parameter to calibrate. It reproduces the complete network by an iterative process, stochastically choosing, for each commuter living in a municipality of the region, a workplace in the region. The choice is made considering the job offer in each municipality of the region and the distance to all the possible destinations. The model is quite effective if the region is sufficiently autonomous in terms of job offers. However, calibrating the model or being sure of this autonomy requires data or expertise which are not necessarily available. Moreover, the region may not be autonomous. In the present work, we overcome these limitations by extending the job search geographical base of the commuters to the outside of the region and changing the form of the deterrence function. We also found a law to calibrate the improved model which does not require data.
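
    A toy version of the commuting-generation idea is sketched below: each commuter draws a workplace with probability proportional to the destination's job offer multiplied by a deterrence function of distance. The exponential deterrence form, the beta value and the example data are assumptions for illustration; the paper uses a different deterrence function and a calibration law of its own.

```python
import numpy as np

# Toy commuting generation: workplaces are drawn with probability proportional to
# job offer times a deterrence function of distance.  All numbers are invented.

rng = np.random.default_rng(1)

jobs = np.array([1200, 400, 150, 2500])            # job offers per municipality
dist = np.array([[0.0, 12.0, 30.0, 45.0],          # distance matrix (km)
                 [12.0, 0.0, 20.0, 40.0],
                 [30.0, 20.0, 0.0, 25.0],
                 [45.0, 40.0, 25.0, 0.0]])
beta = 0.05                                        # the single parameter to calibrate

def assign_workplaces(home, n_commuters):
    weights = jobs * np.exp(-beta * dist[home])    # job offer x deterrence
    probs = weights / weights.sum()
    return rng.choice(len(jobs), size=n_commuters, p=probs)

flows = np.bincount(assign_workplaces(home=0, n_commuters=1000), minlength=len(jobs))
print("commuting flows from municipality 0:", flows)
```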

  9. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that described procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  10. Mathematical Modeling of Programmatic Requirements for Yaws Eradication

    Science.gov (United States)

    Mitjà, Oriol; Fitzpatrick, Christopher; Asiedu, Kingsley; Solomon, Anthony W.; Mabey, David C.W.; Funk, Sebastian

    2017-01-01

    Yaws is targeted for eradication by 2020. The mainstay of the eradication strategy is mass treatment followed by case finding. Modeling has been used to inform programmatic requirements for other neglected tropical diseases and could provide insights into yaws eradication. We developed a model of yaws transmission varying the coverage and number of rounds of treatment. The estimated number of cases arising from an index case (basic reproduction number [R0]) ranged from 1.08 to 3.32. To have 80% probability of achieving eradication, 8 rounds of treatment with 80% coverage were required at low estimates of R0 (1.45). This requirement increased to 95% at high estimates of R0 (2.47). Extending the treatment interval to 12 months increased requirements at all estimates of R0. At high estimates of R0 with 12 monthly rounds of treatment, no combination of variables achieved eradication. Models should be used to guide the scale-up of yaws eradication. PMID:27983500
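
    The dependence of the required number of rounds on R0 and coverage can be illustrated with a deliberately crude deterministic toy, shown below. It is not the authors' stochastic transmission model: it simply assumes each round cures a fraction equal to the coverage and that every remaining case produces R0 cases before the next round, with an invented initial case count.

```python
# Crude deterministic toy: each mass-treatment round cures `coverage` of prevalent
# cases and every remaining case gives rise to R0 cases before the next round.
# Illustrates why higher R0 or lower coverage pushes up the number of rounds needed.

def rounds_to_eliminate(r0, coverage, initial_cases=1000, max_rounds=50):
    cases = float(initial_cases)
    for n in range(1, max_rounds + 1):
        cases = cases * (1.0 - coverage) * r0
        if cases < 1.0:
            return n
    return None   # never eliminated, e.g. when coverage <= 1 - 1/R0

for r0 in (1.08, 1.45, 2.47, 3.32):
    for coverage in (0.80, 0.95):
        n = rounds_to_eliminate(r0, coverage)
        print(f"R0={r0:.2f} coverage={coverage:.0%}: rounds={n}")
```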

  11. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    of their research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from their socially oriented life for the purposes of building rich socio......This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The authors’ aim...... of abstraction, ambiguous and open for conversations through the modelling process add richness to goal models, and communicate quality attributes of the interaction being modelled to the design phase, where this ambiguity is regarded as a resource for design....

  12. On assumption in low-altitude investigation of dayside magnetospheric phenomena

    Science.gov (United States)

    Koskinen, H. E. J.

    In the physics of large-scale phenomena in complicated media, such as space plasmas, the chain of reasoning from the fundamental physics to conceptual models is a long and winding road, requiring much physical insight and reliance on various assumptions and approximations. The low-altitude investigation of dayside phenomena provides numerous examples of problems arising from the necessity to make strong assumptions. In this paper we discuss some important assumptions that are either unavoidable or at least widely used. Two examples are the concepts of frozen-in field lines and convection velocity. Instead of asking what violates the frozen-in condition, it is quite legitimate to ask what freezes the plasma and the magnetic field in the first place. Another important complex of problems are the limitations introduced by a two-dimensional approach or linearization of equations. Although modern research is more and more moving toward three-dimensional and time-dependent models, limitations in computing power often make a two-dimensional approach tempting. In a similar way, linearization makes equations analytically tractable. Finally, a very central question is the mapping. In the first approximation, the entire dayside magnetopause maps down to the ionosphere through the dayside cusp region. From the mapping viewpoint, the cusp is one of the most difficult regions and assumptions needed to perform the mapping in practice must be considered with the greatest possible care. We can never avoid assumptions but we must always make them clear to ourselves and also to the readers of our papers.

  13. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data the requirements associated with data have begun to dominate and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition the use of these data requirements during systems development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  14. DDH-like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2011-01-01

    DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X)= X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups...

  15. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.

  16. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium – Part 3: Practical considerations, relaxed assumptions, and using tree-ring data to address the amplitude of solar forcing

    Directory of Open Access Journals (Sweden)

    A. Moberg

    2014-06-01

    Full Text Available Practical issues arise when applying a statistical framework for unbiased ranking of alternative forced climate model simulations by comparison with climate observations from instrumental and proxy data (Part 1 in this series). Given a set of model and observational data, several decisions need to be made; e.g. concerning the region that each proxy series represents, the weighting of different regions, and the time resolution to use in the analysis. Objective selection criteria cannot be made here, but we argue to study how sensitive the results are to the choices made. The framework is improved by the relaxation of two assumptions; to allow autocorrelation in the statistical model for simulated climate variability, and to enable direct comparison of alternative simulations to test if any of them fit the observations significantly better. The extended framework is applied to a set of simulations driven with forcings for the pre-industrial period 1000–1849 CE and fifteen tree-ring based temperature proxy series. Simulations run with only one external forcing (land-use, volcanic, small-amplitude solar, or large-amplitude solar) do not significantly capture the variability in the tree-ring data – although the simulation with volcanic forcing does so for some experiment settings. When all forcings are combined (using either the small- or large-amplitude solar forcing), including also orbital, greenhouse-gas and non-volcanic aerosol forcing, and additionally used to produce small simulation ensembles starting from slightly different initial ocean conditions, the resulting simulations are highly capable of capturing some observed variability. Nevertheless, for some choices in the experiment design, they are not significantly closer to the observations than when unforced simulations are used, due to highly variable results between regions. It is also not possible to tell whether the small-amplitude or large-amplitude solar forcing causes the multiple

  17. The Manifestations of Positive Leadership Strategies in the Doctrinal Assumptions of the U.S. Army Leadership Concept

    Directory of Open Access Journals (Sweden)

    Andrzej Lis

    2015-06-01

    Full Text Available The aim of the paper is to identify the manifestations of positive leadership strategies in the doctrinal assumptions of the U.S. Army leadership concept. The components of the U.S. Army leadership requirements model are tested against Cameron’s (2012) model of positive leadership strategies, including: building a positive work climate; fostering positive relationships among the members of an organisation; establishing and promoting positive communication; and manifesting the meaningfulness of work.

  18. A Model for Forecasting Enlisted Student IA Billet Requirements

    Science.gov (United States)

    2016-03-01

    ...were promised and had at least one course failure. Training times: student execution depends on TTT, which includes under-instruction (UI) time and... [Remainder of the record is title-page residue: report by Steven W. Belcher with David L. Reese and Kletus S. Lawler, CNA, March 2016, cleared for public release.]

  19. Analysis of one assumption of the Navier-Stokes equations

    CERN Document Server

    Budarin, V A

    2013-01-01

    This article analyses the assumptions regarding the influence of pressure forces during the calculation of the motion of a Newtonian fluid. The purpose of the analysis is to determine the reasonableness of the assumptions and their impact on the results of the analytical calculation. The connections between equations, the causes of discrepancies in exact solutions of the Navier-Stokes equations at low Reynolds numbers, and the emergence of unstable solutions when using computer programs are also addressed. The need to complement the well-known equations of motion with equations in mechanical stresses means that other, substantive equations are required. It is shown that there are three methods of solving such a problem, and the requirements for the unknown equations are described. Keywords: Navier-Stokes, approximate equation, closing equations, holonomic system.

  20. Regression assumptions in clinical psychology research practice: a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
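
    The central misconception, normality of the variables versus normality of the errors, can be shown in a few lines. In the hypothetical data below the predictor is strongly skewed, yet the residuals are the quantity the assumption actually concerns; the data, seed and test choice are purely illustrative.

```python
import numpy as np
from scipy import stats

# The normality assumption concerns the regression errors, not the raw variables:
# here x is deliberately skewed, while the errors are generated normal.

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=300)         # skewed predictor
y = 1.5 + 0.8 * x + rng.normal(0.0, 1.0, 300)    # normal errors

slope, intercept, *_ = stats.linregress(x, y)
residuals = y - (intercept + slope * x)

print("Shapiro-Wilk p, raw predictor:", stats.shapiro(x).pvalue)          # essentially zero
print("Shapiro-Wilk p, residuals:    ", stats.shapiro(residuals).pvalue)  # typically non-significant
```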

  1. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  2. Mexican-American Cultural Assumptions and Implications.

    Science.gov (United States)

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  3. The homogeneous marginal utility of income assumption

    NARCIS (Netherlands)

    Demuynck, T.

    2015-01-01

    We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and facilit

  5. Culturally Biased Assumptions in Counseling Psychology

    Science.gov (United States)

    Pedersen, Paul B.

    2003-01-01

    Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…

  6. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  7. Leakage-Resilient Circuits without Computational Assumptions

    DEFF Research Database (Denmark)

    Dziembowski, Stefan; Faust, Sebastian

    2012-01-01

    Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied...

  8. SMV model-based safety analysis of software requirements

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Seong, Poong Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)], E-mail: phseong@kaist.ac.kr

    2009-02-15

    Fault tree analysis (FTA) is one of the most frequently applied safety analysis techniques when developing safety-critical industrial systems such as software-based emergency shutdown systems of nuclear power plants and has been used for safety analysis of software requirements in the nuclear industry. However, the conventional method for safety analysis of software requirements has several problems in terms of correctness and efficiency; the fault tree generated from natural language specifications may contain flaws or errors while the manual work of safety verification is very labor-intensive and time-consuming. In this paper, we propose a new approach to resolve problems of the conventional method; we generate a fault tree from a symbolic model verifier (SMV) model, not from natural language specifications, and verify safety properties automatically, not manually, by a model checker SMV. To demonstrate the feasibility of this approach, we applied it to shutdown system 2 (SDS2) of Wolsong nuclear power plant (NPP). In spite of subtle ambiguities present in the approach, the results of this case study demonstrate its overall feasibility and effectiveness.
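
    For readers unfamiliar with what "verifying safety properties automatically" means in practice, the sketch below does it explicitly for a hypothetical two-variable shutdown toy: breadth-first search over the reachable states, reporting any state that violates the property. SMV performs the analogous check symbolically on the model derived from the requirements; the states, transitions and property here are all invented.

```python
from collections import deque

# Explicit-state sketch of safety checking on an invented toy "shutdown" system:
# explore all reachable states and report any state violating the safety property.

def successors(state):
    pressure, tripped = state
    nxt = set()
    if pressure < 3:
        nxt.add((pressure + 1, tripped))         # pressure may rise
    if pressure >= 2:
        nxt.add((pressure, True))                # shutdown logic may trip
    if tripped:
        nxt.add((0, True))                       # a trip drives the pressure down
    return nxt

def safe(state):
    pressure, tripped = state
    return not (pressure >= 3 and not tripped)   # unsafe: high pressure without a trip

def check(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state                         # counterexample state
        for s in successors(state) - seen:
            seen.add(s)
            queue.append(s)
    return None

violation = check((0, False))
print("counterexample:" if violation else "safety property holds", violation or "")
```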

  9. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    Science.gov (United States)

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to develop high quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from the UML requirements analysis model so that we can confirm the validity of input/output data for each page and of page transitions on the system by directly operating the prototype. We propose a mapping rule in which design information independent of each web application framework implementation is defined based on the requirements analysis model, so as to improve the traceability from the valid requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.
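
    The flavour of such a mapping rule can be sketched as follows: a framework-independent design descriptor is derived from an analysis-model element, and only a thin generator differs per target framework. The analysis-model element, the descriptor fields and the Flask-style target snippet are invented stand-ins, not the rule defined in the paper.

```python
# Sketch of the mapping-rule idea: framework-independent design information derived
# from the requirements analysis model, rendered by a framework-specific generator.
# All element names, fields and the target framework are invented for illustration.

analysis_model = {                  # a boundary element from the analysis model
    "page": "RegisterGroupWork",
    "inputs": ["title", "deadline", "members"],
    "action": "createGroupWork",
}

def to_design_descriptor(element):
    """Framework-independent design information derived from the analysis model."""
    return {
        "route": "/" + element["page"].lower(),
        "form_fields": element["inputs"],
        "handler": element["action"],
    }

def to_flask_stub(descriptor):      # one possible framework-specific mapping
    fields = ", ".join(f"request.form['{f}']" for f in descriptor["form_fields"])
    return (f"@app.route('{descriptor['route']}', methods=['POST'])\n"
            f"def {descriptor['handler']}():\n"
            f"    return service.{descriptor['handler']}({fields})")

print(to_flask_stub(to_design_descriptor(analysis_model)))
```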

  10. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  11. Life sciences research in space: The requirement for animal models

    Science.gov (United States)

    Fuller, C. A.; Philips, R. W.; Ballard, R. W.

    1987-01-01

    Use of animals in NASA space programs is reviewed. Animals are needed because life science experimentation frequently requires long-term controlled exposure to environments, statistical validation, invasive instrumentation or biological tissue sampling, tissue destruction, exposure to dangerous or unknown agents, or sacrifice of the subject. The availability and use of human subjects inflight is complicated by the multiple needs and demands upon crew time. Because only living organisms can sense, integrate and respond to the environment around them, the sole use of tissue culture and computer models is insufficient for understanding the influence of the space environment on intact organisms. Equipment for spaceborne experiments with animals is described.

  12. Specification of advanced safety modeling requirements (Rev. 0).

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H.; Tautges, T. J.

    2008-06-30

    The U.S. Department of Energy's Global Nuclear Energy Partnership has led to renewed interest in liquid-metal-cooled fast reactors for the purpose of closing the nuclear fuel cycle and making more efficient use of future repository capacity. However, the U.S. has not designed or constructed a fast reactor in nearly 30 years. Accurate, high-fidelity, whole-plant dynamics safety simulations will play a crucial role by providing confidence that component and system designs will satisfy established design limits and safety margins under a wide variety of operational, design basis, and beyond design basis transient conditions. Current modeling capabilities for fast reactor safety analyses have resulted from several hundred person-years of code development effort supported by experimental validation. The broad spectrum of mechanistic and phenomenological models that have been developed represents an enormous amount of institutional knowledge that needs to be maintained. Complicating this, the existing code architectures for safety modeling evolved from programming practices of the 1970s. This has led to monolithic applications with interdependent data models which require significant knowledge of the complexities of the entire code in order for each component to be maintained. In order to develop an advanced fast reactor safety modeling capability, the limitations of the existing code architecture must be overcome while preserving the capabilities that already exist. To accomplish this, a set of advanced safety modeling requirements is defined, based on modern programming practices, that focuses on modular development within a flexible coupling framework. An approach for integrating the existing capabilities of the SAS4A/SASSYS-1 fast reactor safety analysis code into the SHARP framework is provided in order to preserve existing capabilities while providing a smooth transition to advanced modeling capabilities. In doing this, the advanced fast reactor safety models

  13. Modeling of Testability Requirement Based on Generalized Stochastic Petri Nets

    Institute of Scientific and Technical Information of China (English)

    SU Yong-ding; QIU Jing; LIU Guan-jun; QIAN Yan-ling

    2009-01-01

    Testability design is an effective way to realize fault detection and isolation. An important step is to determine testability figures of merit (TFOMs). Firstly, some factors that influence TFOMs are analyzed, such as the processes of system operation, maintenance and support, and fault detection and isolation. Secondly, a testability requirement analysis model is built based on a generalized stochastic Petri net (GSPN). Then, the system's reachable states are analyzed based on the model, a Markov chain isomorphic with the Petri net is constructed, a state transition matrix is created, and the system's steady-state probabilities are obtained. The relationship between the steady-state availability and the testability parameters can thereby be revealed and reasoned about. Finally, an example shows that the proposed method can determine TFOMs, such as the fault detection rate and fault isolation rate, effectively and reasonably.
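
    Once a GSPN has been mapped to its Markov chain, the steady-state step described here reduces to solving a linear system; the sketch below does that for a small continuous-time chain. The states, rates, and availability definition are assumptions made for the example and are not values from the paper.

        import numpy as np

        # Illustrative 3-state continuous-time Markov chain:
        # 0 = operating, 1 = detected fault under repair, 2 = undetected fault.
        lam, fdr = 0.01, 0.9        # failure rate, fault detection rate (assumed)
        mu_r, mu_u = 0.5, 0.05      # repair rate, rate of eventually catching undetected faults
        Q = np.array([
            [-lam,        lam * fdr,   lam * (1 - fdr)],
            [ mu_r,      -mu_r,        0.0            ],
            [ mu_u,       0.0,        -mu_u           ],
        ])

        # Steady state: pi @ Q = 0 with sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        availability = pi[0]        # probability of being in the operating state
        print(pi.round(4), availability)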

  14. A Utility-Based Model for Determining Functional Requirement Target Values (Model Penentuan Nilai Target Functional Requirement Berbasis Utilitas)

    Directory of Open Access Journals (Sweden)

    Cucuk Nur Rosyidi

    2012-01-01

    Full Text Available In a product design and development process, a designer faces the problem of deciding functional requirement (FR) target values. That decision is made under risk, since it is taken in the early design phase using incomplete information. A utility function can be used to reflect the decision maker's attitude towards risk when making such a decision. In this research, we develop a utility-based model to determine FR target values using a quadratic utility function and information from Quality Function Deployment (QFD). A pencil design is used as a numerical example, with a quadratic utility function for each FR. The model can be applied to balance customer and designer interests in determining FR target values.

  15. Some Considerations on the Basic Assumptions in Rotordynamics

    Science.gov (United States)

    GENTA, G.; DELPRETE, C.; BRUSA, E.

    1999-10-01

    The dynamic study of rotors is usually performed under a number of assumptions, namely small displacements and rotations, small unbalance, and constant angular velocity. The latter assumption can be replaced by a known time history of the spin speed. The present paper develops a general non-linear model which can be used to study the rotordynamic behaviour of both fixed and free rotors without resorting to the mentioned assumptions, and compares the results obtained from a number of non-linear numerical simulations with those computed through the usual linearized approach. It is thus possible to verify that the validity of rotordynamic models extends to situations in which fairly large unbalances and whirling motions are present and, above all, it is shown that the doubts raised about applying a model based on constant spin speed to the case of free rotors, in which the angular momentum is constant, have no grounds. Rotordynamic models can thus be used to study the stability in the small of spinning spacecraft, and the insight obtained from the study of rotors is useful for understanding their attitude dynamics and its interactions with the vibration dynamics.

  16. The OPERA hypothesis: assumptions and clarifications.

    Science.gov (United States)

    Patel, Aniruddh D

    2012-04-01

    Recent research suggests that musical training enhances the neural encoding of speech. Why would musical training have this effect? The OPERA hypothesis proposes an answer on the basis of the idea that musical training demands greater precision in certain aspects of auditory processing than does ordinary speech perception. This paper presents two assumptions underlying this idea, as well as two clarifications, and suggests directions for future research.

  17. On distributional assumptions and whitened cosine similarities

    DEFF Research Database (Denmark)

    Loog, Marco

    2008-01-01

    Recently, an interpretation of the whitened cosine similarity measure as a Bayes decision rule was proposed (C. Liu, "The Bayes Decision Rule Induced Similarity Measures," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, June 2007). This communication makes the observation that some of the distributional assumptions made to derive this measure are very restrictive and, considered simultaneously, even inconsistent.
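
    For readers unfamiliar with the measure itself, a minimal sketch of the whitened cosine similarity is given below; the data, the covariance estimate, and the eigendecomposition-based whitening are illustrative assumptions rather than details taken from the cited papers.

        import numpy as np

        # Whitened cosine similarity: whiten with cov^{-1/2}, then take the ordinary cosine.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))                   # illustrative training data
        cov = np.cov(X, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        W = evecs @ np.diag(evals ** -0.5) @ evecs.T    # whitening transform

        def whitened_cosine(x, y):
            xw, yw = W @ x, W @ y
            return float(xw @ yw / (np.linalg.norm(xw) * np.linalg.norm(yw)))

        print(whitened_cosine(X[0], X[1]))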

  18. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  19. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during

  20. A GENERALIZATION OF TRADITIONAL KANO MODEL FOR CUSTOMER REQUIREMENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Renáta Turisová

    2015-07-01

    Full Text Available Purpose: The theory of attractiveness determines the relationship between the technically achieved and the customer-perceived quality of product attributes. The most frequently used approach in the theory of attractiveness is the implementation of Kano's model. There exist many generalizations of that model which take into consideration various aspects and approaches focused on understanding customer preferences and identifying their priorities for a selling product. The aim of this article is to outline another possible generalization of Kano's model.
    Methodology/Approach: The traditional Kano's model captures the nonlinear relationship between achieved quality attributes and customer requirements. The individual quality attributes are divided into three main categories: must-be, one-dimensional and attractive quality, and into two side categories: indifferent and reverse quality. A well-selling product has to contain the must-be attributes. It should contain as many one-dimensional attributes as possible. If there are also supplementary attractive attributes, the attractiveness of the entire product, from the viewpoint of the customer, rises sharply and nonlinearly, which has a direct positive impact on the potential customer's decision to purchase the product. In this article, we show that the assignment of individual quality attributes of a product to the mentioned categories depends, among other things, on the life cycle costs of the product, or on the price of the product on the market.
    Findings: In practice, we often encounter the classification of products into different price categories: lower, middle and upper class. For a certain type of product the category is either directly declared by the producer (especially in the automotive industry), or is determined by the customer by means of an assessment of available market prices. To each of those groups of products different customer expectations can be assigned

  1. Maximizing the Delivery of MPR Broadcasting Under Realistic Physical Layer Assumptions

    Institute of Scientific and Technical Information of China (English)

    Francois Ingelrest; David Simplot-Ryl

    2008-01-01

    It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that a more realistic model should be considered for experimental simulations. Previous work on realistic scenarios has focused on unicast; however, broadcast requirements are fundamentally different and cannot be derived from the unicast case. Therefore, broadcast protocols must be adapted in order to remain efficient under realistic assumptions. In this paper, we study the well-known multipoint relay broadcast protocol (MPR), in which each node has to choose a set of 1-hop neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original strategy used to select these multipoint relays does not suit a realistic model. On the basis of these results, we propose new selection strategies based solely on link quality. One of the key aspects of our solutions is that our strategies do not require any additional hardware and may be implemented at the application layer, which is particularly relevant in the context of ad hoc and sensor networks where energy savings are mandatory. We finally provide new experimental results that demonstrate the superiority of our strategies under realistic physical assumptions.
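
    A link-quality-aware relay selection of the general kind discussed here can be sketched as a greedy weighted cover; the probabilities, node names, and the particular gain function below are illustrative assumptions, not the selection heuristics evaluated in the paper.

        # p1[n]    : delivery probability of the link source -> 1-hop neighbour n
        # p2[n][m] : delivery probability of the link n -> 2-hop neighbour m
        p1 = {"a": 0.9, "b": 0.6, "c": 0.8}
        p2 = {"a": {"x": 0.7, "y": 0.2}, "b": {"y": 0.9}, "c": {"y": 0.5, "z": 0.8}}

        two_hop = {m for links in p2.values() for m in links}
        relays, covered = set(), set()
        while covered != two_hop and len(relays) < len(p1):
            # Greedily pick the neighbour whose links add the most expected coverage.
            best = max(
                (n for n in p1 if n not in relays),
                key=lambda n: sum(p1[n] * q for m, q in p2[n].items() if m not in covered),
            )
            relays.add(best)
            covered |= set(p2[best])

        print(relays)    # e.g. {'a', 'c'} for these illustrative numbers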

  2. Relaxing the zero-sum assumption in neutral biodiversity theory.

    Science.gov (United States)

    Haegeman, Bart; Etienne, Rampal S

    2008-05-21

    The zero-sum assumption is one of the ingredients of the standard neutral model of biodiversity by Hubbell. It states that the community is saturated all the time, which in this model means that the total number of individuals in the community is constant over time, and therefore introduces a coupling between species abundances. It was shown recently that a neutral model with independent species, and thus without any coupling between species abundances, has the same sampling formula (given a fixed number of individuals in the sample) as the standard model [Etienne, R.S., Alonso, D., McKane, A.J., 2007. The zero-sum assumption in neutral biodiversity theory. J. Theor. Biol. 248, 522-536]. The equilibria of both models are therefore equivalent from a practical point of view. Here we show that this equivalence can be extended to a class of neutral models with density-dependence on the community-level. This result can be interpreted as robustness of the model, i.e. insensitivity of the model to the precise interaction of the species in a neutral community. It can also be interpreted as a lack of resolution, as different mechanisms of interactions between neutral species cannot be distinguished using only a single snapshot of species abundance data.

  3. Closed World Assumption for Disjunctive Reasoning

    Institute of Scientific and Technical Information of China (English)

    WANG Kewen; ZHOU Lizhu

    2001-01-01

    In this paper, the relationship between argumentation and closed world reasoning for disjunctive information is studied. In particular, the authors propose a simple and intuitive generalization of the closed world assumption (CWA) for general disjunctive deductive databases (with default negation). This semantics, called DCWA, allows a natural argumentation-based interpretation and can be used to represent reasoning for disjunctive information. We compare DCWA with GCWA and prove that DCWA extends Minker's GCWA to the class of disjunctive databases with default negation. Also we compare our semantics with some related approaches. In addition, the computational complexity of DCWA is investigated.

  4. LSST camera heat requirements using CFD and thermal seeing modeling

    Science.gov (United States)

    Sebag, Jacques; Vogiatzis, Konstantinos

    2010-07-01

    The LSST camera is located above the LSST primary/tertiary mirror and in front of the secondary mirror, in the shadow of its central obscuration. Due to this position within the optical path, heat released from the camera has a potential impact on seeing degradation that is larger than traditionally estimated for Cassegrain or Nasmyth telescope configurations. This paper presents the results of thermal seeing modeling combined with Computational Fluid Dynamics (CFD) analyses to define the thermal requirements on the LSST camera. Camera power output fluxes are applied to the CFD model as boundary conditions to calculate the steady-state temperature distribution on the camera and in the air inside the enclosure. Using a previously presented post-processing analysis to calculate the optical seeing based on the mechanical turbulence and temperature variations along the optical path, the optical performance resulting from the seeing is determined. The CFD simulations are repeated for different wind speeds and orientations to identify the worst-case scenario and generate an estimate of the seeing contribution as a function of the camera-air temperature difference. Finally, after comparing with the corresponding error budget term, a maximum allowable temperature for the camera is selected.

  5. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    Science.gov (United States)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, in order to overcome the shortcomings of using AD or TRIZ alone, and to solve the problems currently existing in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model and a requirement plan optimization model. Finally, we construct the computer aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  6. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG) model. To our knowledge there has been no systematic study of the validity of the Markov assumption w.r.t. web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal measures of quality, based on the closeness of the mined patterns to the true traversal patterns, are defined and an extensive experimental evaluation is performed, based on two substantial real-world data sets. The results indicate that a large number of rules must be considered to achieve high quality...

  7. Requirements-Driven Deployment: Customizing the Requirements Model for the Host Environment

    NARCIS (Netherlands)

    Ali, Raian; Dalpiaz, Fabiano; Giorgini, Paolo

    2014-01-01

    Deployment is a main development phase which configures software to be ready for use in a certain environment. The ultimate goal of deployment is to enable users to achieve their requirements while using the deployed software. However, requirements are not uniform and differ between deployment environments.

  8. A 2nd generation static model of greenhouse energy requirements (horticern) : a comparison with dynamic models

    CERN Document Server

    Jolliet, O; Munday, G L

    1989-01-01

    Optimisation of a greenhouse and its components requires a suitable model permitting precise determination of its energy requirements. Existing static models are simple but lack precision; dynamic models, though more precise, are unsuitable for use over long periods and difficult to handle in practice. A theoretical study and measurements from the CERN trial greenhouse have allowed the development of a new static model named "HORTICERN", which is precise and easy to use for predicting energy consumption and which takes into account the effects of solar energy, wind and radiative loss to the sky. This paper compares the HORTICERN model with the dynamic models of Bot, Takakura, Van Bavel and Gembloux, and demonstrates that its precision is comparable: differences are on average less than 5%, independent of the type of greenhouse (e.g. single or double glazing, Hortiplus, etc.) and climate. The HORTICERN method has been developed for PC use and is proving to be a powerful tool for greenhouse optimisation by research work...
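
    To give a feel for what a static energy-requirement model computes, the sketch below implements a generic steady-state balance of cover losses against solar gains; the coefficients and areas are placeholder assumptions and the formula is a textbook-style simplification, not the HORTICERN parameterisation.

        # Generic static greenhouse energy balance (illustrative values, not HORTICERN).
        U = 6.0            # W/m2K, overall heat-loss coefficient of the cover (assumed)
        A_cover = 1200.0   # m2, cover area (assumed)
        A_ground = 800.0   # m2, ground area collecting transmitted solar gain (assumed)
        tau = 0.7          # solar transmittance of the cover (assumed)

        def heating_demand_kwh(t_in, t_out, solar_w_per_m2, hours):
            """Energy needed over `hours` to hold t_in; zero if gains exceed losses."""
            losses = U * A_cover * (t_in - t_out)             # W
            gains = tau * solar_w_per_m2 * A_ground           # W
            return max(losses - gains, 0.0) * hours / 1000.0  # kWh

        print(heating_demand_kwh(t_in=18.0, t_out=2.0, solar_w_per_m2=50.0, hours=24))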

  9. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG) model (Borges and Levene, 1999). These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e., false browsing patterns may be discovered. However, to our knowledge there has been no systematic study of the validity of the Markov assumption w.r.t. web usage mining and the resulting quality of the mined browsing patterns.
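
    To make the history-depth idea concrete, the sketch below estimates next-page probabilities from only the last n requests of each session; the sessions, page names and the choice n = 2 are hypothetical illustrations, not data from the paper's evaluation.

        from collections import Counter, defaultdict

        # Order-n Markov model of page requests (toy sessions, n = 2).
        sessions = [["home", "news", "sport", "news"],
                    ["home", "sport", "news", "home"]]
        n = 2

        counts = defaultdict(Counter)
        for s in sessions:
            for i in range(len(s) - n):
                counts[tuple(s[i:i + n])][s[i + n]] += 1

        def next_page_probs(history):
            c = counts[tuple(history[-n:])]
            total = sum(c.values())
            return {page: k / total for page, k in c.items()} if total else {}

        print(next_page_probs(["home", "news"]))    # {'sport': 1.0} for this toy log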

  10. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “systems of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper discusses the need for defining requirements for the components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept showing that it is feasible to apply information modeling to the business processes used to protect an enterprise and mitigate its potential losses. These activities would be modeled both pre- and post-incident.

  11. Evaluating risk factor assumptions: a simulation-based approach

    Directory of Open Access Journals (Sweden)

    Miglioretti Diana L

    2011-09-01

    Full Text Available Abstract Background: Microsimulation models are an important tool for estimating the comparative effectiveness of interventions through prediction of individual-level disease outcomes for a hypothetical population. To estimate the effectiveness of interventions targeted toward high-risk groups, the mechanism by which risk factors influence the natural history of disease must be specified. We propose a method for evaluating these risk factor assumptions as part of model-building.
    Methods: We used simulation studies to examine the impact of risk factor assumptions on the relative rate (RR) of colorectal cancer (CRC) incidence and mortality for a cohort with a risk factor compared to a cohort without the risk factor, using an extension of the CRC-SPIN model for colorectal cancer. We also compared the impact of changing the age at initiation of screening colonoscopy for different risk mechanisms.
    Results: Across CRC-specific risk factor mechanisms, the RR of CRC incidence and mortality decreased (towards one) with increasing age. The rate of change in RRs across age groups depended on both the risk factor mechanism and the strength of the risk factor effect. Increased non-CRC mortality attenuated the effect of CRC-specific risk factors on the RR of CRC when both were present. For each risk factor mechanism, earlier initiation of screening resulted in more life years gained, though the magnitude of life years gained varied across risk mechanisms.
    Conclusions: Simulation studies can provide insight into both the effect of risk factor assumptions on model predictions and the type of data needed to calibrate risk factor models.

  12. Optical tests of Bell's inequalities not resting upon the absurd fair sampling assumption

    CERN Document Server

    Santos, E

    2004-01-01

    A simple local hidden-variables model is exhibited which reproduces the results of all performed tests of Bell's inequalities involving optical photon pairs. For the old atomic-cascade experiments, like Aspect's, the model agrees with quantum mechanics even for ideal set-ups. For more recent experiments, using parametric down-converted photons, the agreement occurs only for actual experiments, involving low efficiency detectors. Arguments are given against the fair sampling assumption, currently combined with the results of the experiments in order to claim a contradiction with local realism. New tests are proposed which are able to discriminate between quantum mechanics and a restricted, but appealing, family of local hidden-variables models. Such tests require detectors with efficiencies just above 20%.

  13. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    Science.gov (United States)

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  14. Making Foundational Assumptions Transparent: Framing the Discussion about Group Communication and Influence

    Science.gov (United States)

    Meyers, Renee A.; Seibold, David R.

    2009-01-01

    In this article, the authors seek to augment Dean Hewes's (1986, 1996) intriguing bracketing and admirable larger effort to "return to basic theorizing in the study of group communication" by making transparent the foundational, and debatable, assumptions that underlie those models. Although these assumptions are addressed indirectly by Hewes, the…

  15. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Science.gov (United States)

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  16. Inference and Assumption in Historical Seismology

    Science.gov (United States)

    Musson, R. M. W.

    The principal aim in studies of historical earthquakes is usually to derive parameters for past earthquakes from macroseismic or other data and thus extend parametric earthquake catalogues back in time, often with improved seismic hazard studies as the ultimate goal. In cases of relatively recent historical earthquakes, for example those of the 18th and 19th centuries, it is often the case that there is such an abundance of available macroseismic data that estimating earthquake parameters is relatively straightforward. For earlier historical periods, especially medieval and earlier, and also for areas where settlement or documentation are sparse, the situation is much harder. The seismologist often finds that he has only a few data points (or even one) for an earthquake that nevertheless appears to be regionally significant. In such cases, it is natural that the investigator will attempt to make the most of the available data, expanding it by making working assumptions, and from these deriving conclusions by inference (i.e. the process of proceeding logically from some premise). This can be seen in a number of existing studies; in some cases extremely slight data are so magnified by the use of inference that one must regard the results as tentative in the extreme. Two main types of inference can be distinguished. The first type is inference from documentation. This is where assumptions are made such as: the absence of a report of the earthquake from this monastic chronicle indicates that at this locality the earthquake was not felt. The second type is inference from seismicity. Here one deals with arguments such as: all recent earthquakes felt at town X are events occurring in seismic zone Y, therefore this ancient earthquake which is only reported at town X probably also occurred in this zone.

  17. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J; Kahn, Yonatan; McCullough, Matthew

    2015-01-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the $m_\chi-\sigma_n$ plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the $v_{min}-\tilde{g}$ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from $v_{min}$ to nuclear recoil momentum ($p_R$), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call $\tilde{h}(p_R)$. The entire family of conventional halo-independent $\tilde{g}(v_{min})$ plots for all DM masses are directly found from the single $\tilde{h}(p_R)$ plot through a simple re...

  18. Estimating the global prevalence of inadequate zinc intake from national food balance sheets: effects of methodological assumptions.

    Directory of Open Access Journals (Sweden)

    K Ryan Wessells

    Full Text Available BACKGROUND: The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population's theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates.
    METHODOLOGY AND PRINCIPAL FINDINGS: National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12% and 66%, depending on which methodological assumptions were applied. However, the country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57-0.99, P<0.01). A "best-estimate" model, comprising zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%.
    CONCLUSIONS AND SIGNIFICANCE: Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public

  19. Closed loop models for analyzing engineering requirements for simulators

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Kleinman, D.

    1980-01-01

    A closed loop analytic model, incorporating a model for the human pilot, (namely, the optimal control model) that would allow certain simulation design tradeoffs to be evaluated quantitatively was developed. This model was applied to a realistic flight control problem. The resulting model is used to analyze both overall simulation effects and the effects of individual elements. The results show that, as compared to an ideal continuous simulation, the discrete simulation can result in significant performance and/or workload penalties.

  20. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-02-14

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model.

  1. Towards New Probabilistic Assumptions in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Schumann Andrew

    2015-01-01

    Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot be observable and additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed since George Herbert Mead and Herbert Blumer. In statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In the paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values. For accepting the variety of such phenomena we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.

  2. Economic Growth Assumptions in Climate and Energy Policy

    Directory of Open Access Journals (Sweden)

    Nir Y. Krakauer

    2014-03-01

    Full Text Available The assumption that the economic growth seen in recent decades will continue has dominated the discussion of future greenhouse gas emissions and the mitigation of and adaptation to climate change. Given that long-term economic growth is uncertain, the impacts of a wide range of growth trajectories should be considered. In particular, slower economic growth would imply that future generations will be relatively less able to invest in emissions controls or adapt to the detrimental impacts of climate change. Taking into consideration the possibility of economic slowdown therefore heightens the urgency of reducing greenhouse gas emissions now by moving to renewable energy sources, even if this incurs short-term economic cost. I quantify this counterintuitive impact of economic growth assumptions on present-day policy decisions in a simple global economy-climate model, the Dynamic Integrated model of Climate and the Economy (DICE). In DICE, slow future growth increases the economically optimal present-day carbon tax rate and the utility of taxing carbon emissions, although the magnitude of the increase is sensitive to model parameters, including the rate of social time preference and the elasticity of the marginal utility of consumption. Future scenario development should specifically include low-growth scenarios, and the possibility of low-growth economic trajectories should be taken into account in climate policy analyses.

  3. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  4. Statistical Tests of the PTHA Poisson Assumption for Submarine Landslides

    Science.gov (United States)

    Geist, E. L.; Chaytor, J. D.; Parsons, T.; Ten Brink, U. S.

    2012-12-01

    We demonstrate that a sequence of dated mass transport deposits (MTDs) can provide information to statistically test whether or not submarine landslides associated with these deposits conform to a Poisson model of occurrence. Probabilistic tsunami hazard analysis (PTHA) most often assumes Poissonian occurrence for all sources, with an exponential distribution of return times. Using dates that define the bounds of individual MTDs, we first describe likelihood and Monte Carlo methods of parameter estimation for a suite of candidate occurrence models (Poisson, lognormal, gamma, Brownian Passage Time). In addition to age-dating uncertainty, both methods incorporate uncertainty caused by the open time intervals: i.e., before the first and after the last event to the present. Accounting for these open intervals is critical when there are a small number of observed events. The optimal occurrence model is selected according to both the Akaike Information Criteria (AIC) and Akaike's Bayesian Information Criterion (ABIC). In addition, the likelihood ratio test can be performed on occurrence models from the same family: e.g., the gamma model relative to the exponential model of return time distribution. Parameter estimation, model selection, and hypothesis testing are performed on data from two IODP holes in the northern Gulf of Mexico that penetrated a total of 14 MTDs, some of which are correlated between the two holes. Each of these events has been assigned an age based on microfossil zonations and magnetostratigraphic datums. Results from these sites indicate that the Poisson assumption is likely valid. However, parameter estimation results using the likelihood method for one of the sites suggest that the events may have occurred quasi-periodically. Methods developed in this study provide tools with which one can determine both the rate of occurrence and the statistical validity of the Poisson assumption when submarine landslides are included in PTHA.
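
    A stripped-down version of the model-selection step described here, fitting exponential and gamma return-time distributions and comparing them with AIC, is sketched below; the event ages are made-up numbers and the sketch ignores the age-dating uncertainty and open-interval corrections that the study incorporates.

        import numpy as np
        from scipy import stats

        ages = np.array([2.1, 5.4, 9.0, 11.2, 16.8, 21.5, 27.9, 33.0])  # ka, illustrative
        dt = np.diff(ages)                         # inter-event times

        loc, scale = stats.expon.fit(dt, floc=0)   # Poisson process -> exponential return times
        aic_exp = 2 * 1 - 2 * stats.expon.logpdf(dt, loc, scale).sum()

        a, loc_g, scale_g = stats.gamma.fit(dt, floc=0)   # gamma alternative
        aic_gam = 2 * 2 - 2 * stats.gamma.logpdf(dt, a, loc_g, scale_g).sum()

        print(aic_exp, aic_gam)                    # lower AIC -> preferred occurrence model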

  5. Validity of the Michaelis-Menten equation--steady-state or reactant stationary assumption: that is the question.

    Science.gov (United States)

    Schnell, Santiago

    2014-01-01

    The Michaelis-Menten equation is generally used to estimate the kinetic parameters, V and K(M), when the steady-state assumption is valid. Following a brief overview of the derivation of the Michaelis-Menten equation for the single-enzyme, single-substrate reaction, a critical review of the criteria for validity of the steady-state assumption is presented. The application of the steady-state assumption makes the implicit assumption that there is an initial transient during which the substrate concentration remains approximately constant, equal to the initial substrate concentration, while the enzyme-substrate complex concentration builds up. This implicit assumption is known as the reactant stationary assumption. This review presents evidence showing that the reactant stationary assumption is distinct from and independent of the steady-state assumption. Contrary to the widely believed notion that the Michaelis-Menten equation can always be applied under the steady-state assumption, the reactant stationary assumption is truly the necessary condition for validity of the Michaelis-Menten equation to estimate kinetic parameters. Therefore, the application of the Michaelis-Menten equation only leads to accurate estimation of kinetic parameters when it is used under experimental conditions meeting the reactant stationary assumption. The criterion for validity of the reactant stationary assumption does not require the restrictive condition of choosing a substrate concentration that is much higher than the enzyme concentration in initial rate experiments. © 2013 FEBS.
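
    For reference, the equation and one commonly cited form of the validity criterion are reproduced below in standard notation; the specific inequality follows the usual Segel-Slemrod-style treatment from the general literature and is an assumption of this sketch rather than a quotation from the paper.

        \[ v = -\frac{ds}{dt} = \frac{V s}{K_M + s}, \qquad V = k_{\mathrm{cat}}\, e_0, \qquad K_M = \frac{k_{-1} + k_{\mathrm{cat}}}{k_1} \]

        Reactant stationary assumption (substrate approximately constant during the initial transient):

        \[ \frac{e_0}{K_M + s_0} \ll 1 \]

    Note that this criterion does not require $s_0 \gg e_0$, which is the point the abstract emphasises.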

  6. Linear irreversible heat engines based on local equilibrium assumptions

    Science.gov (United States)

    Izumida, Yuki; Okuda, Koji

    2015-08-01

    We formulate an endoreversible finite-time Carnot cycle model based on the assumptions of local equilibrium and constant energy flux, where the efficiency and the power are expressed in terms of the thermodynamic variables of the working substance. By analyzing the entropy production rate caused by the heat transfer in each isothermal process during the cycle, and using the endoreversible condition applied to the linear response regime, we identify the thermodynamic flux and force of the present system and obtain a linear relation that connects them. We calculate the efficiency at maximum power in the linear response regime by using the linear relation, which agrees with the Curzon-Ahlborn (CA) efficiency known as the upper bound in this regime. This reason is also elucidated by rewriting our model into the form of the Onsager relations, where our model turns out to satisfy the tight-coupling condition leading to the CA efficiency.
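
    For context, the Curzon-Ahlborn efficiency referred to here and its expansion in the Carnot efficiency are standard results, quoted from the general literature rather than derived from this paper's specific model:

        \[ \eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}} = \frac{\eta_C}{2} + \frac{\eta_C^2}{8} + \cdots, \qquad \eta_C = 1 - \frac{T_c}{T_h} \]

    In the linear response regime only the leading term $\eta_C/2$ is relevant, which is the upper bound the paper recovers for its local-equilibrium engine.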

  7. Tests of the frozen-flux and tangentially geostrophic assumptions using magnetic satellite data

    DEFF Research Database (Denmark)

    Chulliat, A.; Olsen, Nils; Sabaka, T.

    the very large number of flows explaining the observed secular variation under the frozen-flux assumption alone. More recently, it has been shown that the combined frozen-flux and tangentially geostrophic assumptions translate into constraints on the secular variation whose mathematics are now well understood. Using these constraints, we test the combined frozen-flux and tangentially geostrophic assumptions against recent, high-precision magnetic data provided by the Ørsted and CHAMP satellites. The methodology involves building constrained field models using least-squares methods. Two types of models

  8. Bioenergy crop models: Descriptions, data requirements and future challenges

    Energy Technology Data Exchange (ETDEWEB)

    Nair, S. Surendran [University of Tennessee, Knoxville (UTK); Kang, Shujiang [ORNL; Zhang, Xuesong [Pacific Northwest National Laboratory (PNNL); Miguez, Fernando [Iowa State University; Izaurralde, Dr. R. Cesar [Pacific Northwest National Laboratory (PNNL); Post, Wilfred M [ORNL; Dietze, Michael [University of Illinois, Urbana-Champaign; Lynd, L. [Dartmouth College; Wullschleger, Stan D [ORNL

    2012-01-01

    Field studies that address the production of lignocellulosic biomass as a source of renewable energy provide critical data for the development of bioenergy crop models. A literature survey revealed that 14 models have been used for simulating bioenergy crops including herbaceous and woody bioenergy crops, and for crassulacean acid metabolism (CAM) crops. These models simulate field-scale production of biomass for switchgrass (ALMANAC, EPIC, and Agro-BGC), miscanthus (MISCANFOR, MISCANMOD, and WIMOVAC), sugarcane (APSIM, AUSCANE, and CANEGRO), and poplar and willow (SECRETS and 3PG). Two models are adaptations of dynamic global vegetation models and simulate biomass yields of miscanthus and sugarcane at regional scales (Agro-IBIS and LPJmL). Although it lacks the complexity of other bioenergy crop models, the environmental productivity index (EPI) is the only model used to estimate biomass production of CAM (Agave and Opuntia) plants. Except for the EPI model, all models include representations of leaf area dynamics, phenology, radiation interception and utilization, biomass production, and partitioning of biomass to roots and shoots. A few models simulate soil water, nutrient, and carbon cycle dynamics, making them especially useful for assessing the environmental consequences (e.g., erosion and nutrient losses) associated with the large-scale deployment of bioenergy crops. The rapid increase in use of models for energy crop simulation is encouraging; however, detailed information on the influence of climate, soils, and crop management practices on biomass production is scarce. Thus considerable work remains regarding the parameterization and validation of process-based models for bioenergy crops; generation and distribution of high-quality field data for model development and validation; and implementation of an integrated framework for efficient, high-resolution simulations of biomass production for use in planning sustainable bioenergy systems.

  9. A new scenario framework for climate change research: the concept of shared climate policy assumptions

    NARCIS (Netherlands)

    Kriegler, E.; Edmonds, J.; Hallegatte, S.; Ebi, K.L.; Kram, T.; Riahi, K.; Winkler, J.; van Vuuren, Detlef|info:eu-repo/dai/nl/11522016X

    2014-01-01

    The new scenario framework facilitates the coupling of multiple socioeconomic reference pathways with climate model products using the representative concentration pathways. This will allow for improved assessment of climate impacts, adaptation and mitigation. Assumptions about climate policy play a

  10. DDH-like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike;

    2011-01-01

    generalized to use instead d-DDH, and we show in the generic group model that d-DDH is harder than DDH. This means that virtually any application of DDH can now be realized with the same (amortized) efficiency, but under a potentially weaker assumption. On the negative side, we also show that d-DDH, just like DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X) = X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen

  11. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for a heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971
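
    The specific misconception flagged here, testing the raw variables rather than the residuals for normality, can be illustrated with a short sketch; the simulated data and the Shapiro-Wilk test are illustrative choices, not the review's methodology.

        import numpy as np
        from scipy import stats

        # Check normality of the *residuals*, not of the raw variables themselves.
        rng = np.random.default_rng(1)
        x = rng.exponential(size=300)                  # skewed predictor -- that is fine
        y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=300)

        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
        residuals = y - X @ beta

        print(stats.shapiro(y).pvalue)          # raw outcome may look "non-normal" -- irrelevant
        print(stats.shapiro(residuals).pvalue)  # this is the assumption that matters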

  12. Achieving a System Operational Availability Requirement (ASOAR) Model

    Science.gov (United States)

    1992-07-01

    ASOAR requires only system- and end-item-level input data, not Line Replaceable Unit (LRU) input data. ASOAR usage provides concepts for major logistics...the Corp/Theater ADP Service Center II (CTASC II) to a system operational availability goal. The CTASC II system configuration had many redundant types

  13. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, Rampal S.; Alonso, David; McKane, Alan J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the a

  14. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  15. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  17. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to the supervised hot spot prediction algorithms, the semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated a better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all the three semisupervised assumptions, i.e., smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction, by considering all the three semisupervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which is implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
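
    The general flavour of graph-based semi-supervised propagation used in this line of work can be sketched as below; the feature vectors, the RBF similarity graph, and the simple clamped propagation loop are generic illustrative choices and are not the IterPropMCS algorithm itself.

        import numpy as np

        # Labels of known residues diffuse to unlabeled ones over a similarity graph.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(8, 4))                  # residue feature vectors (toy)
        y = np.array([1, 0, -1, -1, 1, -1, -1, 0])   # 1 hot spot, 0 not, -1 unlabeled

        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2)                              # RBF similarity weights
        P = W / W.sum(axis=1, keepdims=True)         # row-normalised transition matrix

        F = np.where(y == 1, 1.0, 0.0)               # initial scores
        labeled = y != -1
        for _ in range(50):                          # propagate, then clamp known labels
            F = P @ F
            F[labeled] = (y[labeled] == 1).astype(float)

        print(F.round(2))                            # scores in [0, 1] for unlabeled residues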

  18. WAVS radiation shielding references and assumptions

    Energy Technology Data Exchange (ETDEWEB)

    McLean, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-07

    At ITER, the confluence of a high radiation environment and the requirement for high performance imaging for plasma and plasma-facing surface diagnosis will necessitate extensive application of radiation shielding. Recommended here is a dual-layer shield design composed of lead for gamma attenuation, surrounded by a fire-resistant polyethylene doped with a thermal neutron absorber for neutron shielding.

  19. Decision-Theoretic Planning: Structural Assumptions and Computational Leverage

    CERN Document Server

    Boutilier, C; Hanks, S; 10.1613/jair.575

    2011-01-01

    Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the...
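
    As background for readers unfamiliar with MDPs, the sketch below runs value iteration on a two-state, two-action toy problem; the transition matrices, rewards and discount factor are invented for illustration and are not drawn from the planning domains surveyed in the paper.

        import numpy as np

        # P[a][s, s'] = transition probability under action a, R[s] = state reward.
        P = {
            0: np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
            1: np.array([[0.5, 0.5], [0.0, 1.0]]),   # action 1
        }
        R = np.array([0.0, 1.0])
        gamma = 0.95

        V = np.zeros(2)
        for _ in range(500):                          # value iteration to (near) convergence
            V = R + gamma * np.max([P[a] @ V for a in P], axis=0)

        policy = np.argmax([P[a] @ V for a in P], axis=0)
        print(V, policy)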

  20. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    Science.gov (United States)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused towards carrying out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e. from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on the development of SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model. Finally the results are simulated with the help of statistical pie charts. On the basis of the simulated results of prevalent models and approaches to negotiation, a unified approach for requirements negotiation and stakeholder collaboration is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters like MBTI, opportunity analysis, etc.

  1. Early-type galaxies at z~1.3. II. Masses and ages of early-type galaxies in different environments and their dependence on stellar population model assumptions

    CERN Document Server

    Raichoor, A; Nakata, F; Stanford, S A; Holden, B P; Rettura, A; Huertas-Company, M; Postman, M; Rosati, P; Blakeslee, J P; Demarco, R; Eisenhardt, P; Illingworth, G; Jee, M J; Kodama, T; Tanaka, M; White, R L

    2011-01-01

    We have derived masses and ages for 79 early-type galaxies (ETGs) in different environments at z~1.3 in the Lynx supercluster and in the GOODS/CDF-S field using multiwavelength (0.6-4.5 $\mu$m; KPNO, Palomar, Keck, HST, Spitzer) datasets. At this redshift the contribution of the TP-AGB phase is important for ETGs, and the mass and age estimates depend on the choice of the stellar population model used in the spectral energy distribution fits. We describe in detail the differences among model predictions for a large range of galaxy ages, showing the dependence of these differences on age. Current models still yield large uncertainties. While recent models from Maraston and Charlot & Bruzual offer better modeling of the TP-AGB phase with respect to less recent Bruzual & Charlot models, their predictions do not often match. The modeling of this TP-AGB phase has a significant impact on the derived parameters for galaxies observed at high-redshift. Some of our results do not depend on the choice of the mod...

  2. Requirements and Problems in Parallel Model Development at DWD

    Directory of Open Access Journals (Sweden)

    Ulrich Schättler

    2000-01-01

    Full Text Available Nearly 30 years after introducing the first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also, the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.

  3. Modelling and Simulation for Requirements Engineering and Options Analysis

    Science.gov (United States)

    2010-05-01

    DRDC Toronto CR 2010-... ...externalize their mental model of the assumed solution for critique and correction by others, and whether or not this would assist in ensuring that...

  4. Predicting Flu Season Requirements: An Undergraduate Modeling Project

    Science.gov (United States)

    Kramlich, Gary R., II; Braunstein Fierson, Janet L.; Wright, J. Adam

    2010-01-01

    This project was designed to be used in a freshman calculus class whose students had already been introduced to logistic functions and basic data modeling techniques. It need not be limited to such an audience, however; it has also been implemented in a topics in mathematics class for college upperclassmen. Originally intended to be presented in…

  5. Thermal Modeling and Feedback Requirements for LIFE Neutronic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, J E

    2009-07-15

    An initial study is performed to determine how temperature considerations affect LIFE neutronic simulations. Among other figures of merit, the isotopic mass accumulation, thermal power, tritium breeding, and criticality are analyzed. Possible fidelities of thermal modeling and degrees of coupling are explored. Lessons learned from switching and modifying nuclear datasets are communicated.

  6. Mathematical Formulation Requirements and Specifications for the Process Models

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

    2010-11-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments

  7. "Open Access" Requires Clarification: Medical Journal Publication Models Evolve.

    Science.gov (United States)

    Lubowitz, James H; Brand, Jefferson C; Rossi, Michael J; Provencher, Matthew T

    2017-03-01

    While Arthroscopy journal is a traditional subscription model journal, our companion journal Arthroscopy Techniques is "open access." We used to believe open access simply meant online and free of charge. However, while open-access journals are free to readers, in 2017 authors must make a greater sacrifice in the form of an article-processing charge (APC). Again, while this does not apply to Arthroscopy, the APC will apply to Arthroscopy Techniques.

  8. Testing assumptions for conservation of migratory shorebirds and coastal managed wetlands

    Science.gov (United States)

    Collazo, Jaime; James Lyons,; Herring, Garth

    2015-01-01

    Managed wetlands provide critical foraging and roosting habitats for shorebirds during migration; therefore, ensuring their availability is a priority action in shorebird conservation plans. Contemporary shorebird conservation plans rely on a number of assumptions about shorebird prey resources and migratory behavior to determine stopover habitat requirements. For example, the US Shorebird Conservation Plan for the Southeast-Caribbean region assumes that average benthic invertebrate biomass in foraging habitats is 2.4 g dry mass m−2 and that the dominant prey item of shorebirds in the region is Chironomid larvae. For effective conservation and management, it is important to test working assumptions and update predictive models that are used to estimate habitat requirements. We surveyed migratory shorebirds and sampled the benthic invertebrate community in coastal managed wetlands of South Carolina. We sampled invertebrates at three points in time representing early, middle, and late stages of spring migration, and concurrently surveyed shorebird stopover populations at approximately 7-day intervals throughout migration. We used analysis of variance by ranks to test for temporal variation in invertebrate biomass and density, and we used a model-based approach (linear mixed model and Monte Carlo simulation) to estimate mean biomass and density. There was little evidence of temporal variation in biomass or density during the course of spring shorebird migration, suggesting that shorebirds did not deplete invertebrate prey resources at our site. Estimated biomass was 1.47 g dry mass m−2 (95 % credible interval 0.13–3.55), approximately 39 % lower than values used in the regional shorebird conservation plan. An additional 4728 ha (a 63 % increase) would be required if habitat objectives were derived from biomass levels observed in our study. Polychaetes, especially Laeonereis culveri (2569 individuals m−2), were the most abundant prey in foraging
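
    As a rough illustration of how the reported habitat adjustment follows from the biomass estimates, the sketch below scales a habitat objective by the ratio of assumed to observed prey biomass. The baseline area is back-calculated from the reported 63% increase and 4728 ha, so the snippet only illustrates the proportional relationship and is not a figure from the plan.

```python
# Energetic habitat objectives scale inversely with available prey biomass,
# so required area ~ (assumed biomass / observed biomass) * baseline area.
assumed_biomass  = 2.4    # g dry mass per m^2 (regional plan assumption)
observed_biomass = 1.47   # g dry mass per m^2 (this study's estimate)

scale = assumed_biomass / observed_biomass   # ~1.63, i.e. ~63% more area needed
extra_fraction = scale - 1.0

# Hypothetical baseline back-calculated so that the extra area matches the
# reported 4728 ha; the true planning baseline may differ.
baseline_ha = 4728 / extra_fraction
print(f"Area increase: {extra_fraction:.0%}, extra habitat: {baseline_ha * extra_fraction:.0f} ha")
```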

  9. Measuring sound absorption using local field assumptions

    NARCIS (Netherlands)

    Kuipers, E.R.

    2013-01-01

    To more effectively apply acoustically absorbing materials, it is desirable to measure angle-dependent sound absorption coefficients, preferably in situ. Existing measurement methods are based on an overall model of the acoustic field in front of the absorber, and are therefore sensitive to

  10. Public key cryptography from weaker assumptions

    DEFF Research Database (Denmark)

    Zottarel, Angela

    This dissertation is focused on the construction of public key cryptographic primitives and on the relative security analysis in a meaningful theoretic model. This work takes two orthogonal directions. In the first part, we study cryptographic constructions preserving their security properties also...

  11. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    Enterprise resource planning (ERP) systems are ubiquitous in commercial enterprises of all sizes and invariably need to account for the notion of value-added tax (VAT). The legal and technical difficulties in handling VAT are exacerbated by spanning a broad and chaotic spectrum of intricate country-specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform...

  12. Assumptions of Customer Knowledge Enablement in the Open Innovation Process

    Directory of Open Access Journals (Sweden)

    Jokubauskienė Raminta

    2017-08-01

    Full Text Available In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but customers nevertheless stand as the cornerstone in this area, since customers' knowledge is one of the most important sources of new knowledge and ideas. When evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customer knowledge management facilitates the creation of innovations. However, the other factors that influence open innovation, and at the same time customer knowledge management, should also be examined. This article presents a theoretical model which reveals the assumptions of the open innovation process and their impact on the firm's performance.

  13. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  14. Contemporary assumptions on human nature and work and approach to human potential managing

    Directory of Open Access Journals (Sweden)

    Vujić Dobrila

    2006-01-01

    Full Text Available The general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and the preference for a general organizational model, as well as for particular mechanisms of human potential management. The research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and in a group of small business enterprises differing in terms of entrepreneurial structure and type of activity. The general hypothesis, that assumptions on human nature and work are statistically significantly connected to the preferred approach (model) of work motivation and commitment, has been confirmed. The specific hypotheses have also been confirmed: the assumptions on a human as a rational economic being are statistically significantly correlated with only two mechanisms of traditional models, the mechanism of work method control and the working discipline mechanism; the assumptions on a human as a social being are statistically significantly correlated with all mechanisms of engaging employees belonging to the human relations model, except the mechanism of introducing an adequate type of reward for all employees independently of working results; and the assumptions on a human as a creative being are statistically significantly and positively correlated with the preference for two mechanisms belonging to the human resource model, investing in education and training and creating conditions for the application of knowledge and skills. Young respondents holding assumptions on a human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resources model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and preferred models of engagement appears especially in the sub-sample of managers, in the category of young subjects

  15. Unrealistic Assumptions in Economics: an Analysis under the Logic of Socioeconomic Processes

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2014-11-01

    Full Text Available The realism of assumptions is an ongoing debate within the philosophy of economics. One of the most referenced papers in this matter belongs to Milton Friedman. He defends the use of unrealistic assumptions not only on pragmatic grounds, but also because of the intrinsic difficulty of determining the extent of realism. On the other hand, realists have criticized (and still do today) the use of unrealistic assumptions - such as the assumptions of rational choice, perfect information, homogeneous goods, etc. However, they did not accompany their statements with a proper epistemological argument to support their positions. This work aims to show that the realism of (a particular sort of) assumptions is clearly relevant when examining economic models, since the system under study (the real economies) is not compatible with the logic of invariance and of mechanisms, but with the logic of possibility trees. Because of this, models will not function as tools for predicting outcomes, but as representations of alternative scenarios, whose similarity to the real world must be examined in terms of the verisimilitude of a class of model assumptions

  16. Modelo Century de dinâmica da matéria orgânica do solo: equações e pressupostos Century model of soil organic matter dynamics: equations and assumptions

    Directory of Open Access Journals (Sweden)

    Luiz Fernando Carvalho Leite

    2003-08-01

    Full Text Available The modeling of biological processes aims at land-use planning, the establishment of environmental standards, and the estimation of the actual and potential risks of agricultural and environmental activities. Several models have been created in the last 25 years. Century is a mechanistic model that analyzes the long-term dynamics of soil organic matter and of nutrients in the soil-plant system in several agroecosystems. The soil organic matter submodel has an active compartment (microbial biomass and products), a slow compartment (plant and microbial products that are physically protected or biologically resistant to decomposition) and a passive compartment (chemically recalcitrant or also physically protected), each with a different decomposition rate. First-order equations are used to model all compartments of soil organic matter, and soil temperature and moisture modify the decomposition rates. The recycling of the active compartment and the formation of the passive compartment are controlled by the soil sand and clay contents, respectively. Plant residues are divided into compartments depending on their lignin and nitrogen contents. Through the model, organic matter can be related to fertility levels and to current and future management, improving the understanding of nutrient transformations in soils of several agroecosystems.
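
    To illustrate the first-order pool structure described above, here is a minimal multi-pool decay sketch. The rate constants, climate modifiers, and time step are invented for illustration and are not the calibrated Century parameters.

```python
import numpy as np

# Illustrative first-order decay of three SOM pools, dC/dt = -k * f(T, W) * C.
# Rate constants (per year) and pool sizes are placeholders, not Century's values.
k = {"active": 7.3, "slow": 0.2, "passive": 0.0045}
C = {"active": 0.5, "slow": 5.0, "passive": 10.0}   # t C / ha, hypothetical

def climate_modifier(temp_factor, moisture_factor):
    # Century multiplies the intrinsic rate by temperature and moisture scalars.
    return temp_factor * moisture_factor

dt, years = 1.0 / 12.0, 10          # monthly step, 10-year run
f = climate_modifier(0.8, 0.6)      # hypothetical scalars in [0, 1]

for step in range(int(years / dt)):
    for pool in C:
        C[pool] *= np.exp(-k[pool] * f * dt)   # exact solution of first-order decay over dt

print({pool: round(value, 3) for pool, value in C.items()})
```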

  17. Studies on Models,Patterns and Require-ments of Digestible Amino Acids for Layers by Nitrogen Metabolism

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Nitrogen (N) metabolism experiments were made to estimate separately the amino acid requirements of 43-48 week old layers for maintenance and for protein accretion, and to establish models for estimating digestible amino acid requirements. The regression relationship of nitrogen retention versus amino acid intake was estimated for each amino acid by feeding semi-synthetic diets, each made specifically deficient in one amino acid, at N intake rates of 0.91, 0.52, 0.15 and 0.007 g kg-1 body weight (W0.75) per day. From the regression coefficients, it was calculated that, for the accretion of 1 g protein, the dietary digestible amino acid requirements were (mg) Thr 63.1, Val 100.4, Met 39.9, Ile 88.6, Leu 114.3, Phe 63.2, Lys 87.0, His 20.5, Arg 87.9, Trp 21.4, Met+Cys 77.6, and Phe+Tyr 114.3. Daily amino acid requirements for N equilibrium were estimated to be (mg kg-1 W0.75 per day) Thr 50.6, Val 74.7, Met 30.3, Ile 66.7, Leu 81.4, Phe 44.8, Lys 60.5, His 14.7, Arg 73.9, Trp 17.3, Met+Cys 58.6, and Phe+Tyr 83.9. The dietary digestible amino acid patterns for protein accretion and N equilibrium were also proposed. Models for estimating digestible amino acid requirements for the different production purposes were developed.

  18. Finansal Varlıkları Fiyatlama Modelinin Analizi: Varsayımlar, Bulgular ve Hakkındaki Eleştiriler (An Analysis of the Capital Asset Pricing Model: Assumptions, Arguments and Critics)

    Directory of Open Access Journals (Sweden)

    Hakan Bilir

    2016-03-01

    Full Text Available The process of evaluating investment opportunities depends on the measurement of expected return and risk. The Capital Asset Pricing Model (CAPM) has for many years been one of the cornerstones of modern finance theory. The model posits a simple linear relationship between the expected return of assets and their systematic risk. It is still used to calculate the cost of capital, to measure portfolio management performance, and to evaluate investments. The appeal of the CAPM comes from its powerful predictions about how to measure risk and about the relationship between expected return and risk. However, this ability of the model has been questioned by academics and practitioners for more than 30 years, with the debates conducted largely at the empirical level. The empirical problems of the CAPM are theoretical failings that stem from its many simplifying assumptions, and the numerous unrealistic assumptions make the model practically unusable. The main criticisms of the model concentrate on the risk-free interest rate, the market portfolio, and the beta coefficient.
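
    Since the abstract above is a translation, a compact sketch of the model it discusses may help: the CAPM's linear relation E[R_i] = R_f + beta_i (E[R_m] - R_f), with beta estimated from return covariances. The return series and risk-free rate below are simulated, illustrative numbers only.

```python
import numpy as np

# Illustrative CAPM calculation with made-up monthly return series.
rng = np.random.default_rng(0)
market = rng.normal(0.008, 0.04, 120)                       # monthly market returns
asset  = 0.002 + 1.2 * market + rng.normal(0, 0.02, 120)    # asset with "true" beta ~1.2

# beta = cov(R_i, R_m) / var(R_m)
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)

risk_free = 0.002                                            # assumed monthly risk-free rate
expected_market = market.mean()
expected_asset = risk_free + beta * (expected_market - risk_free)   # the CAPM line
print(f"beta = {beta:.2f}, CAPM expected monthly return = {expected_asset:.4f}")
```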

  19. Legal assumptions for private company claim for additional (supplementary payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    Full Text Available The subject matter of analysis in this article is the set of legal assumptions which must be met in order to enable a private company to call for additional payments. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payments in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting which creates individual obligations for additional payments. The third assumption is defined as definiteness regarding the sum of the payment and the due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for realization of the company's right to claim additional payments from a member of the private company.

  20. Sensitivity analysis of incomplete longitudinal data departing from the missing at random assumption: Methodology and application in a clinical trial with drop-outs.

    Science.gov (United States)

    Moreno-Betancur, M; Chavance, M

    2016-08-01

    Statistical analyses of longitudinal data with drop-outs based on direct likelihood, and using all the available data, provide unbiased and fully efficient estimates under some assumptions about the drop-out mechanism. Unfortunately, these assumptions can never be tested from the data. Thus, sensitivity analyses should be routinely performed to assess the robustness of inferences to departures from these assumptions. However, each specific scientific context requires different considerations when setting up such an analysis, no standard method exists and this is still an active area of research. We propose a flexible procedure to perform sensitivity analyses when dealing with continuous outcomes, which are described by a linear mixed model in an initial likelihood analysis. The methodology relies on the pattern-mixture model factorisation of the full data likelihood and was validated in a simulation study. The approach was prompted by a randomised clinical trial for sleep-maintenance insomnia treatment. This case study illustrated the practical value of our approach and underlined the need for sensitivity analyses when analysing data with drop-outs: some of the conclusions from the initial analysis were shown to be reliable, while others were found to be fragile and strongly dependent on modelling assumptions. R code for implementation is provided.
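
    The paper's procedure rests on a pattern-mixture factorisation for a linear mixed model; as a much-simplified, hedged illustration of the same idea, the sketch below performs a delta-adjusted single imputation of a dropped-out final outcome and traces how the estimated treatment effect moves as the departure-from-MAR parameter delta varies. All data, the dropout mechanism, and the imputation model are invented and far simpler than the methodology described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
treat = rng.integers(0, 2, n)                        # 1 = treated arm
baseline = rng.normal(50, 10, n)
final = baseline - 5 * treat + rng.normal(0, 5, n)   # true treatment effect: -5

# Dropout probability depends on baseline only (an MAR-like mechanism for illustration).
observed = rng.random(n) < 1 / (1 + np.exp(-(60 - baseline) / 10))

def effect_under_delta(delta):
    """Impute missing finals from an MAR regression, then shift the
    treated arm's imputations by delta (the assumed departure from MAR)."""
    X = np.column_stack([np.ones(n), baseline, treat])
    coef, *_ = np.linalg.lstsq(X[observed], final[observed], rcond=None)
    imputed = final.copy()
    missing = ~observed
    imputed[missing] = X[missing] @ coef
    imputed[missing & (treat == 1)] += delta
    return imputed[treat == 1].mean() - imputed[treat == 0].mean()

for delta in (-5, -2, 0, 2, 5):
    print(f"delta = {delta:+d}: estimated treatment effect = {effect_under_delta(delta):.2f}")
```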

  1. Forecasting Model of Coal Requirement Quantity Based on Grey System Theory

    Institute of Scientific and Technical Information of China (English)

    孙继湖

    2001-01-01

    The generally used methods for forecasting coal requirement quantity include the analogy method, the extrapolation method and the cause-effect analysis method. However, the precision of forecasting results obtained with these methods is relatively low. This paper uses grey system theory and sets up a grey forecasting model GM(1,3) for coal requirement quantity. The forecasting result for the Chinese coal requirement quantity coincides with the actual values, which shows that the model is reliable. Finally, this model is used to forecast the Chinese coal requirement quantity for the next ten years.
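
    The paper builds the multivariate GM(1,3) model; as a hedged sketch of the grey-modelling workflow it rests on, here is the simpler univariate GM(1,1) variant fitted to a short, made-up demand series (accumulate, fit, restore).

```python
import numpy as np

# Univariate GM(1,1) grey forecasting sketch on toy data (the paper uses GM(1,3)).
x0 = np.array([120.0, 128.0, 135.0, 144.0, 152.0])   # hypothetical demand series
x1 = np.cumsum(x0)                                    # accumulated generating operation (AGO)

z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values
B = np.column_stack([-z1, np.ones_like(z1)])
Y = x0[1:]
(a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)        # development coefficient a, grey input b

def forecast(k):
    """Restore the k-th value of the original series (k = 0 is the first observation)."""
    if k == 0:
        return x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

print("Fitted:", [round(forecast(k), 1) for k in range(len(x0))])
print("Next-period forecast:", round(forecast(len(x0)), 1))
```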

  2. Digital Avionics Information System (DAIS): Training Requirements Analysis Model Users Guide. Final Report.

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    This user's guide describes the functions, logical operations and subroutines, input data requirements, and available outputs of the Training Requirements Analysis Model (TRAMOD), a computerized analytical life cycle cost modeling system for use in the early stages of system design. Operable in a stand-alone mode, TRAMOD can be used for the…

  3. Assumption tests regarding the ‘narrow’ rectangles dimensions of the open thin wall sections

    Science.gov (United States)

    Oanta, E.; Panait, C.; Sabau, A.; Barhalescu, M.; Dascalescu, A. E.

    2016-08-01

    Computer-based analytic models that use strength of materials theory inherit the accuracy given by the theory's basic simplifying hypotheses. The corresponding assumptions were rationally conceived hundreds of years ago, in an age when there was no computing instrument, so minimizing the necessary volume of calculation was an important requirement. An initial study attempted to evaluate how 'thin' the walls of an open section may be for the analytic calculus method to give accurate results. In that initial study, the calculus for a rectangular section loaded by twisting moments was compared with that for a narrow section under the same load. Because only analytic methods applied to a simple section shape were compared, a more thorough study was required. Here, we consider a thin-wall open section loaded by a twisting moment, which is discretized into 'narrow' rectangles. The ratio of the sides of the 'narrow' rectangles is the variable of the study. We compare the results of the finite element analysis with the results of the analytic method. The conclusions are important for the development of computer-based analytic models which use parametrized sections for which different sets of calculus relations may be used.
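
    To make the comparison concrete, the sketch below contrasts the narrow-rectangle (thin-wall) torsion constant b*t^3/3 with a standard Roark-style closed-form approximation for a finite rectangle, sweeping the side ratio. The formula is a textbook-style approximation and the tolerances are illustrative; they are not the authors' exact analytic or FEA results.

```python
# Torsion constant of a rectangle b x t (b >= t): thin-wall limit vs a
# common engineering approximation, swept over the side ratio b/t.
def J_thin_wall(b, t):
    return b * t**3 / 3.0

def J_rectangle(b, t):
    # Roark-style approximation for a solid rectangle in torsion (b >= t).
    return b * t**3 * (1.0 / 3.0 - 0.21 * (t / b) * (1.0 - t**4 / (12.0 * b**4)))

t = 10.0  # wall thickness, arbitrary units
for ratio in (2, 4, 6, 10, 20):
    b = ratio * t
    err = (J_thin_wall(b, t) - J_rectangle(b, t)) / J_rectangle(b, t)
    print(f"b/t = {ratio:>2}: thin-wall formula overestimates J by {err:.1%}")
```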

  4. Indoor Slope and Edge Detection by using Two-Dimensional EKF-SLAM with Orthogonal Assumption

    Directory of Open Access Journals (Sweden)

    Jixin Lv

    2015-04-01

    Full Text Available In an indoor environment, slope and edge detection is an important problem in simultaneous localization and mapping (SLAM), and a basic requirement for mobile robot autonomous navigation. Slope detection allows the robot to find areas that are more traversable, while edge detection can prevent the robot from falling. Three-dimensional (3D) solutions usually require a large memory and high computational costs. This study proposes an efficient two-dimensional (2D) solution to combine slope and edge detection with a line-segment-based extended Kalman filter SLAM (EKF-SLAM) in a structured indoor area. The robot is designed to use two fixed 2D laser range finders (LRFs) to perform horizontal and vertical scans. With the local-area orthogonal assumption, the slope and edge are swiftly modelled into line segments from each vertical scan, and then merged into the EKF-SLAM framework. The EKF-SLAM framework features an optional prediction model that can automatically decide whether the application of iterative closest point (ICP) is necessary to compensate for the dead reckoning error. The experimental results demonstrate that the proposed algorithm is capable of swiftly building an accurate 2D map which contains crucial information on the edges and slopes.
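
    The proposed system merges line-segment features into a standard EKF; as a hedged reminder of that core machinery (not the paper's full line-segment SLAM), here is a generic linear-Gaussian predict/update step with toy matrices.

```python
import numpy as np

# Generic Kalman-style predict/update, the core loop that EKF-SLAM extends
# with a line-segment feature map (all matrices below are illustrative).
def predict(x, P, F, Q, u=0.0):
    x = F @ x + u                      # motion / dead-reckoning model
    P = F @ P @ F.T + Q                # uncertainty grows during prediction
    return x, P

def update(x, P, z, H, R):
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2D position state with a direct position measurement.
x, P = np.zeros(2), np.eye(2)
F, Q = np.eye(2), 0.01 * np.eye(2)
H, R = np.eye(2), 0.1 * np.eye(2)
x, P = predict(x, P, F, Q, u=np.array([0.5, 0.0]))
x, P = update(x, P, z=np.array([0.55, 0.02]), H=H, R=R)
print(x, np.diag(P))
```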

  5. Co-Dependency: An Examination of Underlying Assumptions.

    Science.gov (United States)

    Myer, Rick A.; And Others

    1991-01-01

    Discusses need for careful examination of codependency as diagnostic category. Critically examines assumptions that codependency is disease, addiction, or predetermined by the environment. Discusses implications of assumptions. Offers recommendations for mental health counselors focusing on need for systematic research, redirection of efforts to…

  7. Special Theory of Relativity without special assumptions and tachyonic motion

    Directory of Open Access Journals (Sweden)

    E. Kapuścik

    2010-01-01

    Full Text Available The most general form of transformations of space-time coordinates in Special Theory of Relativity based solely on physical assumptions is described. Only the linearity of space-time transformations and the constancy of the speed of light are used as assumptions. The application to tachyonic motion is indicated.

  8. 40 CFR 761.2 - PCB concentration assumptions for use.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01 edition): ... AND USE PROHIBITIONS, General, § 761.2 PCB concentration assumptions for use. (a)(1) Any person may..., oil-filled cable, and rectifiers whose PCB concentration is not established contain PCBs at < 50 ppm...

  9. Meta-Model and UML Profile for Requirements Management of Software and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2011-01-01

    Full Text Available Software and embedded system companies today encounter problems related to requirements management tool integration, incorrect tool usage, and lack of traceability. This is due to the use of tools with no clear meta-model and no semantics for communicating requirements between different stakeholders. This paper presents a comprehensive meta-model for requirements management. The focus is on the software and embedded system domains. The goal is to define generic requirements management domain concepts and abstract interfaces between requirements management and system development. This leads to a portable requirements management meta-model which can be adapted to various system modeling languages. The created meta-model is prototyped by translating it into a UML profile. The profile is imported into a UML tool, which is used for rapid evaluation of the meta-model concepts in practice. The developed profile is associated with a proof-of-concept report generator tool that automatically produces up-to-date documentation from the models in the form of web pages. The profile is then used to create an example model of an embedded system requirements specification.

  10. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    Science.gov (United States)

    Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero

    2014-10-01

    Supply chain models (SCMs) are potentially capable of integrating different aspects to support decision making for enterprise management tasks. The aim of the paper is to propose a hybrid mathematical programming model for the optimisation of production requirements resource planning. The preliminary model was conceived bottom-up from the analysis of a real industrial case and is oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimisation produced good results for the objective function.

  11. Vocational Didactics: Core Assumptions and Approaches from Denmark, Germany, Norway, Spain and Sweden

    Science.gov (United States)

    Gessler, Michael; Moreno Herrera, Lázaro

    2015-01-01

    The design of vocational didactics has to meet special requirements. Six core assumptions are identified: outcome orientation, cultural-historical embedding, horizontal structure, vertical structure, temporal structure, and the changing nature of work. Different approaches and discussions from school-based systems (Spain and Sweden) and dual…

  12. The National Teacher Corps: A Study of Shifting Goals and Changing Assumptions

    Science.gov (United States)

    Eckert, Sarah Anne

    2011-01-01

    This article investigates the lasting legacy of the National Teacher Corps (NTC), which was created in 1965 by the U.S. federal government with two crucial assumptions: that teaching poor urban children required a very specific skill set and that teacher preparation programs were not providing adequate training in these skills. Analysis reveals…

  13. Weak convergence of Jacobian determinants under asymmetric assumptions

    Directory of Open Access Journals (Sweden)

    Teresa Alberico

    2012-05-01

    Full Text Available Let $\Omega$ be a bounded open set in $\mathbb{R}^2$, sufficiently smooth, and let $f_k=(u_k,v_k)$ and $f=(u,v)$ be mappings belonging to the Sobolev space $W^{1,2}(\Omega,\mathbb{R}^2)$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\mu$ in the sense of measures, and if one allows different assumptions on the two components of $f_k$ and $f$, e.g. $u_k \rightharpoonup u$ weakly in $W^{1,2}(\Omega)$ and $v_k \rightharpoonup v$ weakly in $W^{1,q}(\Omega)$ for some $q\in(1,2)$, then $d\mu=J_f\,dz$. Moreover, we show that this result is optimal in the sense that the conclusion fails for $q=1$. On the other hand, we prove that $d\mu=J_f\,dz$ remains valid also in the case $q=1$, but it is then necessary to require that $u_k$ converges weakly to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\Omega)$, namely $u_k \rightharpoonup u$ weakly in $W^{1,L^2\log^\alpha L}(\Omega)$ for some $\alpha>1$.

  14. Cleanup of contaminated soil -- Unreal risk assumptions: Contaminant degradation

    Energy Technology Data Exchange (ETDEWEB)

    Schiffman, A. [New Jersey Department of Environmental Protection, Ewing, NJ (United States)

    1995-12-31

    Exposure assessments for the development of risk-based soil cleanup standards or criteria assume that contaminant mass in soil is infinite and conservative (constant concentration). This assumption is not realistic for most organic chemicals. Contaminant mass is lost from soil and ground water when organic chemicals degrade. Factors to correct for chemical mass lost by degradation are derived from first-order kinetics for 85 organic chemicals commonly listed by USEPA and state agencies. Soil cleanup criteria, based on constant concentration, are then corrected for contaminant mass lost. For many chemicals, accounting for mass lost yields large correction factors to risk-based soil concentrations. For degradation in ground water and soil, correction factors range from greater than one to several orders of magnitude. The long exposure durations normally used in exposure assessments (25 to 70 years) result in large correction factors to standards even for carcinogenic chemicals with long half-lives. For the ground water pathway, a typical soil criterion for TCE of 1 mg/kg would be corrected to 11 mg/kg. For noncarcinogens, correcting for mass lost means that risk algorithms used to set soil cleanup requirements are inapplicable for many chemicals, especially for long periods of exposure.
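
    One plausible way to derive such a correction factor from first-order kinetics, assuming exposure is averaged over a long duration while the contaminant decays, is sketched below. The half-life and exposure duration are illustrative, and this is not necessarily the exact formulation used in the paper.

```python
import numpy as np

# Correction factor for a risk-based soil criterion when the contaminant decays by
# first-order kinetics instead of remaining at a constant concentration.
# Average exposure concentration over duration T is C0 * (1 - exp(-kT)) / (kT),
# so the constant-concentration criterion can be scaled up by kT / (1 - exp(-kT)).
def correction_factor(half_life_years, exposure_years):
    k = np.log(2) / half_life_years
    kt = k * exposure_years
    return kt / (1.0 - np.exp(-kt))

# Hypothetical example: 2-year half-life, 30-year exposure duration.
cf = correction_factor(half_life_years=2.0, exposure_years=30.0)
print(f"Correction factor: {cf:.1f}x the constant-concentration criterion")
```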

  15. Testing the Minimal Repair Assumption in an Imperfect Repair Model

    Science.gov (United States)

    1991-09-01

    generator described in Marsaglia, Tsang, and Zaman (1990). Exponential random variables were generated using the function REXP given by Marsaglia and... gamma variates until the current record is exceeded. These gamma random variables were generated using the squeeze method of Marsaglia (1977). Once this... described by Marsaglia and Tsang (1984) for generating random variables from the tail of a distribution was employed. The sample sizes examined were 10, 20

  16. Are joint torque models limited by an assumption of monoarticularity?

    Science.gov (United States)

    Lewis, Martin G C; King, Mark A; Yeadon, Maurice R; Conceição, Filipe

    2012-11-01

    This study determines whether maximal voluntary ankle plantar flexor torque could be more accurately represented using a torque generator that is a function of both knee and ankle kinematics. Isovelocity and isometric ankle plantar flexor torques were measured on a single participant for knee joint angles of 111° to 169° (approximately full extension) using a Contrex MJ dynamometer. Maximal voluntary torque was represented by a 19-parameter two-joint function of ankle and knee joint angles and angular velocities with the parameters determined by minimizing a weighted root mean square difference between measured torques and the two-joint function. The weighted root mean square difference between the two-joint function and the measured torques was 10 N-m or 3% of maximum torque. The two-joint function was a more accurate representation of maximal voluntary ankle plantar flexor torques than an existing single-joint function where differences of 19% of maximum torque were found. It is concluded that when the knee is flexed by more than 40°, a two-joint representation is necessary.

  17. COMPETITION VERSUS COLLUSION: THE PARALLEL BEHAVIOUR IN THE ABSENCE OF THE SYMETRY ASSUMPTION

    Directory of Open Access Journals (Sweden)

    Romano Oana Maria

    2012-07-01

    Full Text Available Cartel detection is usually viewed as a key task of competition authorities. A special case of cartel behaviour is parallel behaviour in selling prices. This type of behaviour is difficult to assess, and its analysis does not always yield conclusive results. To evaluate such behaviour, the available data are compared with theoretical values obtained by using a competitive or a collusive model. When different competitive or collusive models are considered, economists use the symmetry assumption for costs and quantities produced/sold for the simplicity of the calculations. This assumption has the disadvantage that the theoretical values obtained may deviate significantly from the actual values on the market, which can sometimes lead to ambiguous results. The present paper analyses the parallel behaviour of economic agents in the absence of the symmetry assumption and studies the identification of the model under these conditions.

  18. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Density assumptions for converting geodetic glacier volume change to mass change

    Directory of Open Access Journals (Sweden)

    M. Huss

    2013-05-01

    Full Text Available The geodetic method is widely used for assessing changes in the mass balance of mountain glaciers. However, comparison of repeated digital elevation models only provides a glacier volume change that must be converted to a change in mass using a density assumption or model. This study investigates the use of a constant factor for the volume-to-mass conversion based on a firn compaction model applied to simplified glacier geometries with idealized climate forcing, and on two glaciers with long-term mass balance series. It is shown that the "density" of geodetic volume change is not a constant factor and is systematically smaller than ice density in most cases. This is explained by the accretion/removal of low-density firn layers, and changes in the firn density profile with positive/negative mass balance. Assuming a value of 850 ± 60 kg m−3 to convert volume change to mass change is appropriate for a wide range of conditions. For short time intervals (≤3 yr), periods with limited volume change, and/or changing mass balance gradients, the conversion factor can however vary from 0–2000 kg m−3 and beyond, which requires caution when interpreting glacier mass changes based on geodetic surveys.
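
    A minimal sketch of the recommended conversion, propagating the ±60 kg m−3 density uncertainty into the mass change, is given below; the volume change value is hypothetical.

```python
# Convert a geodetic glacier volume change to a mass change using the recommended
# conversion factor of 850 +/- 60 kg m^-3 (the volume change value is made up).
dV = -1.2e6          # m^3 of geodetic volume change over the survey period
rho, rho_err = 850.0, 60.0

dM = rho * dV / 1e3                  # tonnes
dM_err = abs(dV) * rho_err / 1e3     # density contribution to the uncertainty
print(f"Mass change: {dM:.0f} t (+/- {dM_err:.0f} t from the density assumption alone)")
```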

  20. Model-Based Requirements Analysis for Reactive Systems with UML Sequence Diagrams and Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon; Lassen, Kristian Bisgaard

    2008-01-01

    In this paper, we describe a formal foundation for a specialized approach to automatically checking traces against real-time requirements. The traces are obtained from simulation of Coloured Petri Net (CPN) models of reactive systems. The real-time requirements are expressed in terms of a derivative of UML 2.0 high-level Sequence Diagrams. The automated requirement checking is part of a bigger tool framework in which VDM++ is applied to automatically generate initial CPN models based on Problem Diagrams. These models are manually enhanced to provide behavioral descriptions of the environment...

  1. Model requirements for decision support under uncertainty in data scarce dynamic deltas

    NARCIS (Netherlands)

    Haasnoot, Marjolijn; van Deursen, W.P.A.; Kwakkel, J. H.; Middelkoop, H.

    2016-01-01

    There is a long tradition of model-based decision support in water management. The consideration of deep uncertainty, however, changes the requirements imposed on models. In the face of deep uncertainty, models are used to explore many uncertainties and the decision space across multiple outcomes o

  2. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    In this paper we analyze requirements for a tool that supports the integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...

  3. Guideline for Adopting the Local Reaction Assumption for Porous Absorbers in Terms of Random Incidence Absorption Coefficients

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2011-01-01

    The effects of the flow resistivity and the absorber thickness on the difference between the two surface reaction models are examined and discussed. For a porous absorber backed by a rigid surface, the assumption of local reaction always underestimates the random incidence absorption coefficient, and the local reaction models give errors of less than 10% if the thickness exceeds 120 mm for a flow resistivity of 5000 Nm-4s. As the flow resistivity doubles, a decrease in the required thickness by 25 mm is observed to achieve the same amount of error. For an absorber backed by an air gap, the thickness ratio between the material and the air cavity is important, since the thicker the cavity, the more extendedly reacting the absorber. If the absorber thickness is approximately 40% of the cavity depth, the local reaction models give errors below 10% even for a low flow resistivity case.

  4. Providing security assurance in line with national DBT assumptions

    Science.gov (United States)

    Bajramovic, Edita; Gupta, Deeksha

    2017-01-01

    As worldwide energy requirements are increasing simultaneously with climate change and energy security considerations, States are thinking about building nuclear power to fulfill their electricity requirements and decrease their dependence on carbon fuels. New nuclear power plants (NPPs) must have comprehensive cybersecurity measures integrated into their design, structure, and processes. In the absence of effective cybersecurity measures, the impact of nuclear security incidents can be severe. Some of the current nuclear facilities were not specifically designed and constructed to deal with the new threats, including targeted cyberattacks. Thus, newcomer countries must consider the Design Basis Threat (DBT) as one of the security fundamentals during design of physical and cyber protection systems of nuclear facilities. IAEA NSS 10 describes the DBT as "comprehensive description of the motivation, intentions and capabilities of potential adversaries against which protection systems are designed and evaluated". Nowadays, many threat actors, including hacktivists, insider threat, cyber criminals, state and non-state groups (terrorists) pose security risks to nuclear facilities. Threat assumptions are made on a national level. Consequently, threat assessment closely affects the design structures of nuclear facilities. Some of the recent security incidents e.g. Stuxnet worm (Advanced Persistent Threat) and theft of sensitive information in South Korea Nuclear Power Plant (Insider Threat) have shown that these attacks should be considered as the top threat to nuclear facilities. Therefore, the cybersecurity context is essential for secure and safe use of nuclear power. In addition, States should include multiple DBT scenarios in order to protect various target materials, types of facilities, and adversary objectives. Development of a comprehensive DBT is a precondition for the establishment and further improvement of domestic state nuclear-related regulations in the

  5. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  6. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    Energy Technology Data Exchange (ETDEWEB)

    R.E. Sweeney

    2001-02-08

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update, incorporating information from the Viability Assessment (VA), the License Application Design Selection (LADS), the 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.

  7. Modeling and verifying SoS performance requirements of C4ISR systems

    Institute of Scientific and Technical Information of China (English)

    Yudong Qi; Zhixue Wang; Qingchao Dong; Hongyue He

    2015-01-01

    System-of-systems (SoS) engineering involves a complex process of refining high-level SoS requirements into more detailed system requirements and assessing the extent to which the performance of to-be systems may satisfy SoS capability objectives. The key issue is how to model such requirements to automate the process of analysis and assessment. This paper suggests a meta-model that defines both functional and non-functional features of SoS requirements for command and control, communication, computer, intelligence, surveillance and reconnaissance (C4ISR) systems. A domain-specific modeling language is defined by extending unified modeling language (UML) constructs of class and association with fuzzy theory in order to model the fuzzy concepts of performance requirements. An efficiency evaluation function is introduced, based on Bézier curves, to predict the effectiveness of systems. An algorithm is presented to transform domain models in fuzzy UML into a requirements ontology in description logic (DL) so that requirements verification can be automated with a popular DL reasoner such as Pellet.

  8. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  10. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provides cloud geometry as well as cloud micro and macro properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed by a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas they are remarkably overestimated under the random overlap (RO) assumption in comparison with those using the CRM's inherent cloud geometry. These biases can reach as high as 100 W m−2 for SWCF and 60 W m−2 for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance on both SWCF and LWCF simulations, with the biases reduced almost 3-fold compared with the traditional overlap assumptions. The superiority of the GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under the traditional overlap assumptions. The study also points out the deficiency of the constant assumption on Lcf in the GenO assumption. Further examinations indicate that the CRM-diagnosed Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account the variation of Lcf in the vertical well reproduces such a relationship and
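
    For reference, the general overlap (GenO) combination referred to above blends maximum and random overlap with a weight set by the decorrelation length; a hedged sketch for two adjacent layers is shown below, with illustrative cloud fractions and layer spacing.

```python
import numpy as np

# General overlap of two adjacent cloud layers (Hogan & Illingworth 2000 style):
# projected cover = alpha * maximum overlap + (1 - alpha) * random overlap,
# with alpha decaying with layer separation over the decorrelation length Lcf.
def combined_cover(c1, c2, dz, lcf):
    alpha = np.exp(-dz / lcf)
    c_max = max(c1, c2)                 # maximum overlap
    c_rand = c1 + c2 - c1 * c2          # random overlap
    return alpha * c_max + (1.0 - alpha) * c_rand

# Illustrative fractions: two 40% cloudy layers 500 m apart, Lcf = 2 km.
print(f"Combined cover: {combined_cover(0.4, 0.4, dz=500.0, lcf=2000.0):.2f}")
```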

  11. Requirements Evolution Processes Modeling%需求演化过程建模

    Institute of Scientific and Technical Information of China (English)

    张国生

    2012-01-01

    Requirements tasks, requirements activities, requirements engineering processes and the requirements engineering process system are formally defined. Requirements tasks are measured with information entropy. Requirements activities, requirements engineering processes and the requirements engineering process system are measured with joint entropy. From the point of view of requirements engineering processes, the microscopic evolution of iteration and feedback in the requirements engineering processes is modeled with condition-event nets. From the point of view of systems engineering, the macroscopic evolution of the whole software requirements engineering process system is modeled with dissipative structure theory.
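
    As a hedged illustration of the entropy measures mentioned above (not the paper's exact formulation), the sketch below computes Shannon entropy for a single requirements task and joint entropy for a pair of activities from assumed probability tables.

```python
import numpy as np

# Shannon entropy H(X) = -sum p log2 p, and joint entropy H(X, Y) over a joint
# distribution; the probability tables are assumed purely for illustration.
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

task_states = [0.5, 0.3, 0.2]                 # e.g. stable / changed / new
joint = np.array([[0.25, 0.10],               # joint distribution over the states
                  [0.15, 0.20],               # of two interacting activities
                  [0.05, 0.25]])

print(f"Task entropy:  {entropy(task_states):.3f} bits")
print(f"Joint entropy: {entropy(joint.ravel()):.3f} bits")
```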

  12. Performance Requirements Modeling andAssessment for Active Power Ancillary Services

    DEFF Research Database (Denmark)

    Bondy, Daniel Esteban Morales; Thavlov, Anders; Tougaard, Janus Bundsgaard Mosbæk

    2017-01-01

    New sources of ancillary services are expected in the power system. For large and conventional generation units the dynamic response is well understood and detailed individual measurement is feasible, which factors into the straightforward performance requirements applied today. For secure power system operation, a reliable service delivery is required, yet it may not be appropriate to apply conventional performance requirements to new technologies and methods. The service performance requirements and assessment methods therefore need to be generalized and standardized in order to include future ancillary service sources. This paper develops a modeling method for ancillary services performance requirements, including performance and verification indices. The use of the modeling method and the indices is exemplified in two case studies.

  13. Assumptions and Axioms: Mathematical Structures to Describe the Physics of Rigid Bodies

    CERN Document Server

    Butler, Philip H; Renaud, Peter F

    2010-01-01

    This paper challenges some of the common assumptions underlying the mathematics used to describe the physical world. We start by reviewing many of the assumptions underlying the concepts of real, physical, rigid bodies and the translational and rotational properties of such rigid bodies. Nearly all elementary and advanced texts make physical assumptions that are subtly different from ours, and as a result we develop a mathematical description that is subtly different from the standard mathematical structure. Using the homogeneity and isotropy of space, we investigate the translational and rotational features of rigid bodies in two and three dimensions. We find that the concept of rigid bodies and the concept of the homogeneity of space are intrinsically linked. The geometric study of rotations of rigid objects leads to a geometric product relationship for lines and vectors. By requiring this product to be both associative and to satisfy Pythagoras' theorem, we obtain a choice of Clifford algebras. We extend o...

  14. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  15. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE. In recent years increasing interest has been devoted to IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant ... literature, and few contributions represent the three remaining discourses, which unjustifiably leaves out issues that research could and most probably should investigate. In order to highlight research potentials, limitations, and underlying assumptions of each discourse, we develop four IT PPM metaphors ... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations’ IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases ...

  16. Different Random Distributions Research on Logistic-Based Sample Assumption

    Directory of Open Access Journals (Sweden)

    Jing Pan

    2014-01-01

    Full Text Available Logistic-based sample assumption is proposed in this paper, with research on different random distributions through this system. It provides an assumption system for logistic-based samples, including its sample space structure. Moreover, the influence of different random input distributions has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal distribution, uniform distribution, and beta distribution) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under different random distributions. Thereafter, numerical analysis infers that the distribution of outputs depends on that of inputs to some extent, and that this assumption system is not an independent-increment process but is quasi-stationary.
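
    As a rough illustration of how output distributions can depend on input distributions, the sketch below draws samples from normal, uniform and beta distributions, passes them through a logistic (sigmoid) function and compares summary statistics of the outputs. The map and all parameter values are assumptions chosen for illustration; this is not the authors' assumption system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Three input distributions, as in the experiment described above
inputs = {
    "normal":  rng.normal(loc=0.0, scale=1.0, size=n),
    "uniform": rng.uniform(low=-3.0, high=3.0, size=n),
    "beta":    rng.beta(a=2.0, b=5.0, size=n) * 6.0 - 3.0,  # rescaled to [-3, 3]
}

def logistic(x):
    """Standard logistic (sigmoid) function used here as the sample-generating map."""
    return 1.0 / (1.0 + np.exp(-x))

for name, x in inputs.items():
    y = logistic(x)
    print(f"{name:8s} mean={y.mean():.3f} std={y.std():.3f} "
          f"p5={np.quantile(y, 0.05):.3f} p95={np.quantile(y, 0.95):.3f}")
```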

  17. Fraud Risk Modelling: Requirements Elicitation in the Case of Telecom Services

    DEFF Research Database (Denmark)

    Yesuf, Ahmed; Wolos, Lars Peter; Rannenberg, Kai

    2017-01-01

    In this paper, we highlight the important requirements for a usable and context-aware fraud risk modelling approach for Telecom services. To do so, we have conducted two workshops with experts from a Telecom provider and experts from multi-disciplinary areas. In order to show and document the requirements, we...

  18. The Nuremberg Code subverts human health and safety by requiring animal modeling

    OpenAIRE

    Greek Ray; Pippus Annalea; Hansen Lawrence A

    2012-01-01

    Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...

  19. Finite Element Simulations to Explore Assumptions in Kolsky Bar Experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Crum, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-05

    The chief purpose of this project has been to develop a set of finite element models that attempt to explore some of the assumptions in the experimental set-up and data reduction of the Kolsky bar experiment. In brief, the Kolsky bar, sometimes referred to as the split Hopkinson pressure bar, is an experimental apparatus used to study the mechanical properties of materials at high strain rates. Kolsky bars can be constructed to conduct experiments in tension or compression, both of which are studied in this paper. The basic operation of the tension Kolsky bar is as follows: compressed air is inserted into the barrel that contains the striker; the striker accelerates towards the left and strikes the left end of the barrel, producing a tensile stress wave that propagates first through the barrel and then down the incident bar, into the specimen, and finally into the transmission bar. In the compression case, the striker instead travels to the right and impacts the incident bar directly. As the stress wave travels through an interface (e.g., the incident bar to specimen connection), a portion of the pulse is transmitted and the rest reflected. The incident pulse, as well as the transmitted and reflected pulses, is picked up by strain gauges installed on the incident and transmission bars. By interpreting the data acquired by these strain gauges, the stress/strain behavior of the specimen can be determined.
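
    The data reduction referred to above is commonly carried out with the classical one-wave analysis, in which the reflected pulse gives the specimen strain rate and the transmitted pulse gives the specimen stress. The sketch below is a generic version of that reduction; the bar properties and the synthetic gauge signals are placeholder assumptions, not values from this report.

```python
import numpy as np

# Bar and specimen properties (placeholder values, not from the report)
E_bar = 200e9                   # Young's modulus of the bars [Pa]
rho_bar = 7850.0                # bar density [kg/m^3]
c0 = np.sqrt(E_bar / rho_bar)   # elastic wave speed in the bars [m/s]
A_bar = np.pi * 0.01 ** 2       # bar cross-section, 20 mm diameter [m^2]
A_spec = np.pi * 0.005 ** 2     # specimen cross-section, 10 mm diameter [m^2]
L_spec = 0.005                  # specimen gauge length [m]

# Reflected and transmitted strain pulses from the gauges (synthetic placeholders)
t = np.linspace(0.0, 100e-6, 1001)            # time [s]
eps_reflected = -1e-3 * np.sin(np.pi * t / t[-1])
eps_transmitted = 0.8e-3 * np.sin(np.pi * t / t[-1])

# Classical one-wave Kolsky analysis
strain_rate = -2.0 * c0 * eps_reflected / L_spec      # specimen strain rate [1/s]
strain = np.cumsum(strain_rate) * (t[1] - t[0])       # engineering strain [-]
stress = E_bar * (A_bar / A_spec) * eps_transmitted   # specimen stress [Pa]

print(f"peak strain rate ~ {strain_rate.max():.0f} 1/s, "
      f"peak stress ~ {stress.max()/1e6:.0f} MPa")
```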

  20. A Hybrid Parallel Execution Model for Logic Based Requirement Specifications (Invited Paper)

    Directory of Open Access Journals (Sweden)

    Jeffrey J. P. Tsai

    1999-05-01

    Full Text Available It is well known that undiscovered errors in a requirements specification are extremely expensive to fix when they are discovered in the software maintenance phase. Errors in the requirements phase can be reduced through the validation and verification of the requirements specification. Many logic-based requirements specification languages have been developed to achieve these goals. However, the execution and reasoning of a logic-based requirements specification can be very slow. An effective way to improve their performance is to execute and reason about the logic-based requirements specification in parallel. In this paper, we present a hybrid model to facilitate the parallel execution of a logic-based requirements specification language. A data dependency analysis technique is first applied to a logic-based specification to find all the mode combinations that exist within a specification clause. This mode information is used to support a novel hybrid parallel execution model, which combines both top-down and bottom-up evaluation strategies. The new execution model can find the failure in the deepest node of the search tree at an early stage of the evaluation, and can thus reduce the total number of nodes searched in the tree, the number of processes that need to be generated, and the number of communication channels needed in the search process. A simulator has been implemented to analyze the execution behavior of the new model. Experiments show significant improvement based on several criteria.

  1. Virtual Community Life Cycle: a Model to Develop Systems with Fluid Requirements

    OpenAIRE

    El Morr, Christo; Maret, Pierre de; Rioux, Marcia; Dinca-Panaitescu, Mihaela; Subercaze, Julien

    2011-01-01

    This paper reports the results of an investigation into the life cycle model needed to develop information systems for groups of people with fluid requirements. For this purpose, we developed a modified spiral model and applied it to the analysis, design and implementation of a virtual community for a group of researchers and organizations that collaborated in a research project and had changing system requirements. The virtual knowledge community was dedicated to support mobilization and dissemi...

  2. Tails assumptions and posterior concentration rates for mixtures of Gaussians

    OpenAIRE

    Naulet, Zacharie; Rousseau, Judith

    2016-01-01

    Nowadays in density estimation, posterior rates of convergence for location and location-scale mixtures of Gaussians are only known under light-tail assumptions; with better rates achieved by location mixtures. It is conjectured, but not proved, that the situation should be reversed under heavy tails assumptions. The conjecture is based on the feeling that there is no need to achieve a good order of approximation in regions with few data (say, in the tails), favoring location-scale mixtures w...

  3. US Intervention in Failed States: Bad Assumptions=Poor Outcomes

    Science.gov (United States)

    2002-01-01

    NATIONAL DEFENSE UNIVERSITY, NATIONAL WAR COLLEGE STRATEGIC LOGIC ESSAY: US Intervention in Failed States: Bad Assumptions = Poor Outcomes (2002). ...country remains in the grip of poverty, natural disasters, and stagnation. Rwanda, another small African country, is populated principally

  4. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling

    Science.gov (United States)

    2009-10-01

    Beattie-Bridgeman, virial expansion: the above equations are suitable for moderate pressures and are usually based on either empirical constants... (CR 2010-013, October 2009, Defence R&D Canada: A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling).

  5. Density assumptions for converting geodetic glacier volume change to mass change

    Directory of Open Access Journals (Sweden)

    M. Huss

    2013-01-01

    Full Text Available The geodetic method is widely used for assessing changes in the mass balance of mountain glaciers. However, comparison of repeated digital elevation models only provides a glacier volume change that must be converted to a change in mass using a density assumption. This study investigates this conversion factor based on a firn compaction model applied to simplified glacier geometries with idealized climate forcing, and two glaciers with long-term mass balance series. It is shown that the "density" of geodetic volume change is not a constant factor and is systematically smaller than ice density in most cases. This is explained by the accretion/removal of low-density firn layers, and changes in the firn density profile with positive/negative mass balance. Assuming a value of 850 ± 60 kg m−3 to convert volume change to mass change is appropriate for a wide range of conditions. For short time intervals (≤3 yr), periods with limited volume change, and/or changing mass balance gradients, the conversion factor can however vary from 0–2000 kg m−3 and beyond, which requires caution when interpreting glacier mass changes based on geodetic surveys.
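
    The conversion itself is a single multiplication; the sketch below applies the 850 ± 60 kg m−3 factor recommended above to a hypothetical geodetic volume change and propagates both uncertainty terms. The volume change and its uncertainty are invented numbers used only for illustration.

```python
# Convert a geodetic glacier volume change to a mass change using a density assumption.
dV = -0.12e9          # volume change over the survey period [m^3] (hypothetical)
sigma_dV = 0.02e9     # uncertainty of the volume change [m^3] (hypothetical)

rho = 850.0           # recommended conversion factor [kg m^-3]
sigma_rho = 60.0      # its uncertainty [kg m^-3]

dM = rho * dV         # mass change [kg]
# First-order (Gaussian) error propagation for a product of two uncertain terms
sigma_dM = abs(dM) * ((sigma_dV / abs(dV)) ** 2 + (sigma_rho / rho) ** 2) ** 0.5

print(f"mass change = {dM / 1e12:.3f} +/- {sigma_dM / 1e12:.3f} Gt")
```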

  6. Estimating Alarm Thresholds for Process Monitoring Data under Different Assumptions about the Data Generating Mechanism

    Directory of Open Access Journals (Sweden)

    Tom Burr

    2013-01-01

    Full Text Available Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
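
    As a minimal illustration of threshold estimation for a target false alarm rate, the sketch below compares a parametric (single Gaussian) threshold with an empirical quantile computed from simulated residuals drawn from a two-component Gaussian mixture. The residual model, the mixture parameters and the 0.1% false alarm rate are assumptions for illustration, not safeguards data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated PM residuals (residual = data - prediction); a two-component Gaussian
# mixture, loosely mimicking the solution-monitoring example mentioned above.
residuals = np.concatenate([
    rng.normal(0.0, 1.0, size=9000),
    rng.normal(0.5, 2.0, size=1000),
])

alpha = 1e-3  # target per-sample false alarm probability (assumed)

# Parametric threshold assuming a single Gaussian data-generating mechanism
mu, sigma = residuals.mean(), residuals.std(ddof=1)
thr_gauss = stats.norm.ppf(1.0 - alpha, loc=mu, scale=sigma)

# Nonparametric threshold: empirical (1 - alpha) quantile of the residuals
thr_empirical = np.quantile(residuals, 1.0 - alpha)

print(f"Gaussian threshold:  {thr_gauss:.2f}")
print(f"Empirical threshold: {thr_empirical:.2f}")
print("empirical false alarm rate at Gaussian threshold:",
      np.mean(residuals > thr_gauss))
```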

  7. Controversies in psychotherapy research: epistemic differences in assumptions about human psychology.

    Science.gov (United States)

    Shean, Glenn D

    2013-01-01

    It is the thesis of this paper that differences in philosophical assumptions about the subject matter and treatment methods of psychotherapy have contributed to disagreements about the external validity of empirically supported therapies (ESTs). These differences are evident in the theories that are the basis for both the design and interpretation of recent psychotherapy efficacy studies. The natural science model, as applied to psychotherapy outcome research, transforms the constitutive features of the study subject in a reciprocal manner so that problems, treatments, and indicators of effectiveness are limited to what can be directly observed. Meaning-based approaches to therapy emphasize processes and changes that do not lend themselves to experimental study. Hermeneutic philosophy provides a supplemental model to establishing validity in those instances where outcome indicators do not lend themselves to direct observation and measurement and require "deep" interpretation. Hermeneutics allows for a broadening of psychological study that allows one to establish a form of validity that is applicable when constructs do not refer to things that literally "exist" in nature. From a hermeneutic perspective the changes that occur in meaning-based therapies must be understood and evaluated on the manner in which they are applied to new situations, the logical ordering and harmony of the parts with the theoretical whole, and the capability of convincing experts and patients that the interpretation can stand up against other ways of understanding. Adoption of this approach often is necessary to competently evaluate the effectiveness of meaning-based therapies.

  8. Structural model requirements to describe microbial inactivation during a mild heat treatment.

    Science.gov (United States)

    Geeraerd, A H; Herremans, C H; Van Impe, J F

    2000-09-10

    The classical concept of D and z values, established for sterilisation processes, is unable to deal with the typical non-loglinear behaviour of survivor curves occurring during the mild heat treatment of sous vide or cook-chill food products. Structural model requirements are formulated, immediately eliminating some candidate model types. Promising modelling approaches are thoroughly analysed and, if applicable, adapted to the specific needs: two models developed by Casolari (1988), the inactivation model of Sapru et al. (1992), the model of Whiting (1993), the Baranyi and Roberts growth model (1994), the model of Chiruta et al. (1997), the model of Daughtry et al. (1997) and the model of Xiong et al. (1999). A range of experimental data for Bacillus cereus, Yersinia enterocolitica, Escherichia coli O157:H7, Listeria monocytogenes and Lactobacillus sake is used to illustrate the different models' performances. Moreover, a novel modelling approach is developed, fulfilling all formulated structural model requirements, and based on a careful analysis of literature knowledge of the shoulder and tailing phenomenon. Although a thorough insight into the occurrence of shoulders and tails is still lacking from a biochemical point of view, this newly developed model incorporates the possibility of a straightforward interpretation within this framework.
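
    A survivor curve that satisfies the structural requirements discussed above (a log-linear core with a shoulder and a tail) can be sketched as follows. The closed form and the parameter values used here are illustrative assumptions in the spirit of the model family reviewed in the abstract, not necessarily the authors' final model.

```python
import numpy as np

def survivors(t, n0, n_res, k_max, shoulder):
    """Log-linear inactivation with shoulder and tail (illustrative closed form).

    n0       initial population [CFU/ml]
    n_res    residual (tailing) population [CFU/ml]
    k_max    maximum specific inactivation rate [1/min]
    shoulder shoulder length [min]
    """
    cc0 = np.exp(k_max * shoulder) - 1.0   # initial pool of a "protective component"
    decline = np.exp(-k_max * t) * (1.0 + cc0) / (1.0 + cc0 * np.exp(-k_max * t))
    return (n0 - n_res) * decline + n_res

t = np.linspace(0.0, 30.0, 7)              # treatment time [min]
n = survivors(t, n0=1e7, n_res=1e2, k_max=0.8, shoulder=5.0)
for ti, ni in zip(t, n):
    print(f"t = {ti:5.1f} min   log10 N = {np.log10(ni):.2f}")
```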

  9. Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking

    NARCIS (Netherlands)

    Di Nicolo, G.; Gamba, A.; Lucchetta, M.

    2011-01-01

    This paper formulates a dynamic model of a bank exposed to both credit and liquidity risk, which can resolve financial distress in three costly forms: fire sales, bond issuance and equity issuance. We use the model to analyze the impact of capital regulation, liquidity requirements and taxation on ba

  11. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    Science.gov (United States)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remained separated from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.
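
    The idea of generating English requirement statements from constraints embedded in the architecture can be illustrated with a toy generator. The sketch below is not the Europa Clipper tooling; the constraint structure and the wording templates are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    """A constraint attached to an architectural element (hypothetical structure)."""
    subject: str      # element the requirement applies to
    parameter: str    # constrained property
    relation: str     # "<=", ">=", or "=="
    value: float
    unit: str

RELATION_TEXT = {
    "<=": "shall not exceed",
    ">=": "shall be at least",
    "==": "shall be",
}

def to_shall_statement(c: Constraint) -> str:
    """Render a constraint as an English 'shall statement'."""
    return (f"The {c.subject} {c.parameter} {RELATION_TEXT[c.relation]} "
            f"{c.value:g} {c.unit}.")

constraints = [
    Constraint("flight system", "total mass", "<=", 250.0, "kg"),
    Constraint("downlink", "data rate", ">=", 1.0, "Mbps"),
]
for c in constraints:
    print(to_shall_statement(c))
```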

  13. Workshop on Functional Requirements for the Modeling of Fate and Transport of Waterborne CBRN Materials

    Energy Technology Data Exchange (ETDEWEB)

    Giles, GE

    2005-02-03

    The purpose of this Workshop on "Functional Requirements for the Modeling of Fate and Transport of Waterborne CBRN Materials" was to solicit functional requirements for tools that help Incident Managers plan for and deal with the consequences of industrial or terrorist releases of materials into the nation's waterways and public water utilities. Twenty representatives attended and several made presentations. Several hours of discussions elicited a set of requirements. These requirements were summarized in a form for the attendees to vote on their highest priority requirements. These votes were used to determine the prioritized requirements that are reported in this paper and can be used to direct future developments.

  14. Wastewater treatment models in teaching and training: the mismatch between education and requirements for jobs.

    Science.gov (United States)

    Hug, Thomas; Benedetti, Lorenzo; Hall, Eric R; Johnson, Bruce R; Morgenroth, Eberhard; Nopens, Ingmar; Rieger, Leiv; Shaw, Andrew; Vanrolleghem, Peter A

    2009-01-01

    As mathematical modeling of wastewater treatment plants has become more common in research and consultancy, a mismatch between education and requirements for model-related jobs has developed. There seems to be a shortage of skilled people, both in terms of quantity and in quality. In order to address this problem, this paper provides a framework to outline different types of model-related jobs, assess the required skills for these jobs and characterize different types of education that modelers obtain "in school" as well as "on the job". It is important to consider that education of modelers does not mainly happen in university courses and that the variety of model related jobs goes far beyond use for process design by consulting companies. To resolve the mismatch, the current connection between requirements for different jobs and the various types of education has to be assessed for different geographical regions and professional environments. This allows the evaluation and improvement of important educational paths, considering quality assurance and future developments. Moreover, conclusions from a workshop involving practitioners and academics from North America and Europe are presented. The participants stressed the importance of non-technical skills and recommended strengthening the role of realistic modeling experience in university training. However, this paper suggests that all providers of modeling education and support, not only universities, but also software suppliers, professional associations and companies performing modeling tasks are called to assess and strengthen their role in training and support of professional modelers.

  15. 76 FR 22925 - Assumption Buster Workshop: Abnormal Behavior Detection Finds Malicious Actors

    Science.gov (United States)

    2011-04-25

    ... Assumption Buster Workshop: Abnormal Behavior Detection Finds Malicious Actors AGENCY: The National... assumptionbusters@nitrd.gov . Travel expenses will be paid at the government rate for selected participants who live... behavioral models to monitor the size and destinations of financial transfers, and/or on-line...

  16. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    Science.gov (United States)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of

  17. cBrother: relaxing parental tree assumptions for Bayesian recombination detection.

    Science.gov (United States)

    Fang, Fang; Ding, Jing; Minin, Vladimir N; Suchard, Marc A; Dorman, Karin S

    2007-02-15

    Bayesian multiple change-point models accurately detect recombination in molecular sequence data. Previous Java-based implementations assume a fixed topology for the representative parental data. cBrother is a novel C language implementation that capitalizes on reduced computational time to relax the fixed tree assumption. We show that cBrother is 19 times faster than its predecessor and the fixed tree assumption can influence estimates of recombination in a medically-relevant dataset. cBrother can be freely downloaded from http://www.biomath.org/dormanks/ and can be compiled on Linux, Macintosh and Windows operating systems. Online documentation and a tutorial are also available at the site.

  18. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Science.gov (United States)

    2012-01-01

    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234

  19. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Full Text Available Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  20. Are nest sites actively chosen? Testing a common assumption for three non-resource limited birds

    Science.gov (United States)

    Goodenough, A. E.; Elliot, S. L.; Hart, A. G.

    2009-09-01

    Many widely-accepted ecological concepts are simplified assumptions about complex situations that remain largely untested. One example is the assumption that nest-building species choose nest sites actively when they are not resource limited. This assumption has seen little direct empirical testing: most studies on nest-site selection simply assume that sites are chosen actively (and seek explanations for such behaviour) without considering that sites may be selected randomly. We used 15 years of data from a nestbox scheme in the UK to test the assumption of active nest-site choice in three cavity-nesting bird species that differ in breeding and migratory strategy: blue tit ( Cyanistes caeruleus), great tit ( Parus major) and pied flycatcher ( Ficedula hypoleuca). Nest-site selection was non-random (implying active nest-site choice) for blue and great tits, but not for pied flycatchers. We also considered the relative importance of year-specific and site-specific factors in determining occupation of nest sites. Site-specific factors were more important than year-specific factors for the tit species, while the reverse was true for pied flycatchers. Our results show that nest-site selection, in birds at least, is not always the result of active choice, such that choice should not be assumed automatically in studies of nesting behaviour. We use this example to highlight the need to test key ecological assumptions empirically, and the importance of doing so across taxa rather than for single "model" species.

  1. Error in the description of foot kinematics due to violation of rigid body assumptions.

    Science.gov (United States)

    Nester, C J; Liu, A M; Ward, E; Howard, D; Cocheba, J; Derrick, T

    2010-03-03

    Kinematic data from rigid segment foot models inevitably includes errors because the bones within each segment move relative to each other. This study sought to define error in foot kinematic data due to violation of the rigid segment assumption. The research compared kinematic data from 17 different mid and forefoot rigid segment models to kinematic data of the individual bones comprising these segments. Kinematic data from a previous dynamic cadaver model study was used to derive individual bone as well as foot segment kinematics. Mean and maximum errors due to violation of the rigid body assumption varied greatly between models. The model with the least error was the combination of the navicular and cuboid (mean errors ...) ... kinematics research study being undertaken.

  2. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2016-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute over what constitutes proper assumptions—even in the absence of corroborating or falsifying empirical evidence. We also discuss how changing assumptions may drive future progress in the resource-based view.

  3. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE. In recent years increasing interest has been devoted to IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant ... literature. In order to highlight research potentials, limitations, and underlying assumptions of each discourse, we develop four IT PPM metaphors: (1) IT PPM as the top management marketplace, (2) IT PPM as the cause of social dilemmas at the lower organizational levels, (3) IT PPM as polity between different organizational interests, (4) IT PPM as power relations that suppress creativity and diversity. Our metaphors can be used by practitioners ... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations’ IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases ...

  4. An emergency dispatch model considering the urgency of the requirement for reliefs in different disaster areas

    Directory of Open Access Journals (Sweden)

    Liu Sheng

    2015-11-01

    Full Text Available Abstract: Purpose: Frequent sudden-onset disasters, which have threatened the survival of humans and the development of society, force the public to pay increasing attention to emergency management. A challenging task in the process of emergency management is the emergency dispatch of reliefs. An emergency dispatch model considering the urgency of the requirement for reliefs in different disaster areas is proposed in this paper to dispatch reliefs reasonably and reduce the effect of sudden-onset disasters. Design/methodology/approach: Firstly, a quantitative assessment of the urgency of the requirement for reliefs in different disaster areas is done by an evaluation method based on Fuzzy Comprehensive Evaluation and improved Evidence Reasoning, which is proposed in this paper. Then, based on the quantitative results, an emergency dispatch model aiming to minimize the response time, the distribution cost and the unsatisfied rate of the requirement for reliefs is proposed, which reflects the requests of disaster areas under emergency, including the urgency of requirement, the economy of distribution and the equity of allocation. Finally, the Genetic Algorithm is improved based on an adaptive crossover and mutation probability function to solve the emergency dispatch model. Findings and Originality/value: A case in which the Y hydraulic power enterprise carries out emergency dispatch of reliefs under continuous sudden-onset heavy rain is given to illustrate the applicability of the emergency dispatch model proposed in this paper. The results show that the emergency dispatch model meets the distribution priority requirement of disaster areas with higher urgency, so that reliefs are supplied in a more timely manner. Research limitations/implications: The emergency dispatch model for large-scale sudden-onset disasters is complex. The quantity of reliefs that a disaster area requires and the running time of vehicles are viewed as available information, and the problem
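
    The adaptive crossover and mutation probabilities mentioned above can be sketched as fitness-dependent functions. The formulas below follow a common adaptive-GA scheme used purely for illustration; they are not necessarily the exact functions of the paper, and all numeric bounds are assumed.

```python
def adaptive_crossover_prob(f_parent_best, f_avg, f_max, pc_hi=0.9, pc_lo=0.6):
    """Crossover probability that shrinks for above-average (already good) parents."""
    if f_max == f_avg:            # converged population: keep exploring
        return pc_hi
    if f_parent_best < f_avg:     # below-average parents: always recombine
        return pc_hi
    return pc_lo + (pc_hi - pc_lo) * (f_max - f_parent_best) / (f_max - f_avg)

def adaptive_mutation_prob(f_individual, f_avg, f_max, pm_hi=0.10, pm_lo=0.01):
    """Mutation probability that shrinks for above-average individuals."""
    if f_max == f_avg:
        return pm_hi
    if f_individual < f_avg:
        return pm_hi
    return pm_lo + (pm_hi - pm_lo) * (f_max - f_individual) / (f_max - f_avg)

# Example: population with average fitness 0.5 and best fitness 0.9
print(adaptive_crossover_prob(0.85, 0.5, 0.9))  # good parents -> lower pc
print(adaptive_mutation_prob(0.40, 0.5, 0.9))   # weak individual -> high pm
```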

  5. Modeling of Car-Following Required Safe Distance Based on Molecular Dynamics

    OpenAIRE

    Dayi Qu; Xiufeng Chen; Wansan Yang; Xiaohua Bian

    2014-01-01

    In the car-following process, some distance is reserved between vehicles, so that drivers can avoid collisions with the vehicles ahead of and behind them in the same lane and keep a reasonable clearance from vehicles in adjacent lanes. This paper investigates the characteristics of vehicle operating safety in the car-following state based on the required safe distance. To tackle this problem, we probe into the required safe distance and a car-following model using molecular dynamics, covering longitudinal and lateral safe...
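
    For readers unfamiliar with the quantity being modelled, a conventional kinematic formulation of the longitudinal required safe distance is sketched below. It is a textbook-style stopping-distance calculation with assumed parameter values, not the molecular-dynamics model developed in the paper.

```python
def required_safe_distance(v_follow, v_lead, a_follow, a_lead,
                           reaction_time=1.0, standstill_gap=2.0):
    """Longitudinal gap needed so the follower can stop without hitting the leader.

    v_*  speeds [m/s], a_*  braking decelerations (positive) [m/s^2]
    """
    gap = (v_follow * reaction_time               # distance covered while reacting
           + v_follow ** 2 / (2.0 * a_follow)     # follower's braking distance
           - v_lead ** 2 / (2.0 * a_lead)         # leader's braking distance
           + standstill_gap)                      # residual clearance at rest
    return max(gap, standstill_gap)

# Follower at 25 m/s behind a leader at 20 m/s, both braking at about 6 m/s^2
print(f"{required_safe_distance(25.0, 20.0, 6.0, 6.0):.1f} m")
```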

  6. Computer software requirements specification for the world model light duty utility arm system

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.E.

    1996-02-01

    This Computer Software Requirements Specification defines the software requirements for the world model of the Light Duty Utility Arm (LDUA) System. It is intended to be used to guide the design of the application software, to be a basis for assessing the application software design, and to establish what is to be tested in the finished application software product. (The LDUA deploys end effectors into underground storage tanks by means of a robotic arm on the end of a telescoping mast.)

  7. IDENTIFYING OPERATIONAL REQUIREMENTS TO SELECT SUITABLE DECISION MODELS FOR A PUBLIC SECTOR EPROCUREMENT DECISION SUPPORT SYSTEM

    Directory of Open Access Journals (Sweden)

    Mohamed Adil

    2014-10-01

    Full Text Available Public sector procurement should be a transparent and fair process. Strict legal requirements are enforced on public sector procurement to make it a standardised process. To make fair decisions on selecting suppliers, a practical method which adheres to legal requirements is important. The research on which this paper is based aimed at identifying a suitable Multi-Criteria Decision Analysis (MCDA) method for the specific legal and functional needs of the Maldivian public sector. To identify such operational requirements, a set of focus group interviews was conducted in the Maldives with public officials responsible for procurement decision making. Based on the operational requirements identified through the focus groups, a criteria-based evaluation of published MCDA methods was carried out to identify the methods suitable for e-procurement decision making. This paper describes the identification of the operational requirements and the results of the evaluation to select suitable decision models for the Maldivian context.
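
    For illustration, the simplest MCDA method that such an evaluation would cover is a normalised weighted sum (simple additive weighting). The sketch below applies it to a hypothetical supplier-selection decision; the criteria, weights and scores are invented and are not drawn from the Maldivian case.

```python
# Simple additive weighting (weighted-sum MCDA) for supplier selection.
criteria = {"price": 0.4, "delivery_time": 0.2, "quality": 0.3, "experience": 0.1}
lower_is_better = {"price", "delivery_time"}

# Raw scores per supplier (hypothetical)
suppliers = {
    "Supplier A": {"price": 120_000, "delivery_time": 30, "quality": 8, "experience": 5},
    "Supplier B": {"price": 100_000, "delivery_time": 45, "quality": 7, "experience": 9},
    "Supplier C": {"price": 140_000, "delivery_time": 25, "quality": 9, "experience": 7},
}

def normalise(criterion):
    """Return a function mapping a raw score to [0, 1], flipped if lower is better."""
    values = [s[criterion] for s in suppliers.values()]
    lo, hi = min(values), max(values)
    def scale(v):
        x = (v - lo) / (hi - lo) if hi > lo else 1.0
        return 1.0 - x if criterion in lower_is_better else x
    return scale

scalers = {c: normalise(c) for c in criteria}
totals = {
    name: sum(w * scalers[c](scores[c]) for c, w in criteria.items())
    for name, scores in suppliers.items()
}
for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {total:.3f}")
```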

  8. On Early Conflict Identification by Requirements Modeling of Energy System Control Structures

    DEFF Research Database (Denmark)

    Heussen, Kai; Gehrke, Oliver; Niemann, Hans Henrik

    2015-01-01

    Control systems are purposeful systems involving goal-oriented information processing (cyber) and technical (physical) structures. Requirements modeling formalizes fundamental concepts and relations of a system architecture at a high-level design stage and can be used to identify potential design...... at later design stages. However, languages employed for requirements modeling today do not offer the expressiveness necessary to represent control purposes in relation to domain level interactions and therefore miss several types of interdependencies. This paper introduces the idea of control structure...

  9. Improved Traceability of a Small Satellite Mission Concept to Requirements Using Model Based System Engineering

    Science.gov (United States)

    Reil, Robin L.

    2014-01-01

    Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the "traditional" document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g. time and cost saved, increased product quality). This paper presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to demonstrate the completeness and consistency of the requirements to the mission concept. Anecdotal information and process-duration metrics are presented for both the MBSE and original DBSE design efforts of SporeSat.

  10. Improved Traceability of Mission Concept to Requirements Using Model Based Systems Engineering

    Science.gov (United States)

    Reil, Robin

    2014-01-01

    Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the traditional document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g. time and cost saved, increased product quality). This thesis presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to analyze the completeness and consistency of the requirements to the mission concept. Overall experience and methodology are presented for both the MBSE and original DBSE design efforts of SporeSat.

  11. Automatically multi-paradigm requirements modeling and analyzing: An ontology-based approach

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    There are several purposes for modeling and analyzing the problem domain before starting the software requirements analysis. First, it focuses on the problem domain, so that the domain users can be involved easily. Secondly, a comprehensive description of the problem domain helps in obtaining a comprehensive software requirements model. This paper proposes an ontology-based approach for modeling the problem domain. It interacts with the domain users by using terminology that they can understand and guides them to provide the relevant information. A multiple-paradigm analysis approach, based on the description of the problem domain, is also presented. Three criteria, i.e. the rationality of the organization structure, the achievability of the organization goals, and the feasibility of the organization process, have been proposed. The results of the analysis can be used as feedback for guiding the domain users to provide further information on the problem domain. Those models of the problem domain can serve as documentation for the pre-requirements analysis phase. They will also be the basis for further software requirements modeling.

  12. Fractured rock modeling in the National Waste Terminal Storage Program: a review of requirements and status

    Energy Technology Data Exchange (ETDEWEB)

    St. John, C.; Krug, A.; Key, S.; Monsees, J.

    1983-05-01

    Generalized computer codes capable of forming the basis for numerical models of fractured rock masses are being used within the NWTS program. Little additional development of these codes is considered justifiable, except in the area of representation of discrete fractures. On the other hand, model preparation requires definition of medium-specific constitutive descriptions and site characteristics and is therefore legitimately conducted by each of the media-oriented projects within the National Waste Terminal Storage program. However, it is essential that a uniform approach to the role of numerical modeling be adopted, including agreement upon the contribution of modeling to the design and licensing process and the need for, and means of, model qualification for particular purposes. This report discusses the role of numerical modeling, reviews the capabilities of several computer codes that are being used to support design or performance assessment, and proposes a framework for future numerical modeling activities within the NWTS program.

  13. A critical assessment of the ecological assumptions underpinning compensatory mitigation of salmon-derived nutrients

    Science.gov (United States)

    Collins, Scott F.; Marcarelli, Amy M.; Baxter, Colden V.; Wipfli, Mark S.

    2015-01-01

    We critically evaluate some of the key ecological assumptions underpinning the use of nutrient replacement as a means of recovering salmon populations and a range of other organisms thought to be linked to productive salmon runs. These assumptions include: (1) nutrient mitigation mimics the ecological roles of salmon, (2) mitigation is needed to replace salmon-derived nutrients and stimulate primary and invertebrate production in streams, and (3) food resources in rearing habitats limit populations of salmon and resident fishes. First, we call into question assumption one because an array of evidence points to the multi-faceted role played by spawning salmon, including disturbance via redd-building, nutrient recycling by live fish, and consumption by terrestrial consumers. Second, we show that assumption two may require qualification based upon a more complete understanding of nutrient cycling and productivity in streams. Third, we evaluate the empirical evidence supporting food limitation of fish populations and conclude it has been only weakly tested. On the basis of this assessment, we urge caution in the application of nutrient mitigation as a management tool. Although applications of nutrients and other materials intended to mitigate for lost or diminished runs of Pacific salmon may trigger ecological responses within treated ecosystems, contributions of these activities toward actual mitigation may be limited.

  14. 41 CFR 60-3.9 - No assumption of validity.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 (2010-07-01): No assumption of validity. Section 60-3.9, Public Contracts and Property Management, Other Provisions Relating to Public... 3-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978), General Principles, § 60-3.9...

  15. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  16. "Touch Me, Like Me": Testing an Encounter Group Assumption

    Science.gov (United States)

    Boderman, Alvin; And Others

    1972-01-01

    An experiment to test an encounter group assumption that touching increases interpersonal attraction was conducted. College women were randomly assigned to a touch or no-touch condition. A comparison of total evaluation scores verified the hypothesis: subjects who touched the accomplice perceived her as a more attractive person than those who did…

  17. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    Science.gov (United States)

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  18. Woman's Moral Development in Search of Philosophical Assumptions.

    Science.gov (United States)

    Sichel, Betty A.

    1985-01-01

    Examined is Carol Gilligan's thesis that men and women use different moral languages to resolve moral dilemmas, i.e., women speak a language of caring and responsibility, and men speak a language of rights and justice. Her thesis is not grounded with adequate philosophical assumptions. (Author/RM)

  19. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes their futility. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics.

  20. Using Contemporary Art to Challenge Cultural Values, Beliefs, and Assumptions

    Science.gov (United States)

    Knight, Wanda B.

    2006-01-01

    Art educators, like many other educators born or socialized within the main-stream culture of a society, seldom have an opportunity to identify, question, and challenge their cultural values, beliefs, assumptions, and perspectives because school culture typically reinforces those they learn at home and in their communities (Bush & Simmons, 1990).…

  1. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    Science.gov (United States)

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  2. Unpacking Assumptions in Research Synthesis: A Critical Construct Synthesis Approach

    Science.gov (United States)

    Wolgemuth, Jennifer R.; Hicks, Tyler; Agosto, Vonzell

    2017-01-01

    Research syntheses in education, particularly meta-analyses and best-evidence syntheses, identify evidence-based practices by combining findings across studies whose constructs are similar enough to warrant comparison. Yet constructs come preloaded with social, historical, political, and cultural assumptions that anticipate how research problems…

  3. Challenging Teachers' Pedagogic Practice and Assumptions about Social Media

    Science.gov (United States)

    Cartner, Helen C.; Hallas, Julia L.

    2017-01-01

    This article describes an innovative approach to professional development designed to challenge teachers' pedagogic practice and assumptions about educational technologies such as social media. Developing effective technology-related professional development for teachers can be a challenge for institutions and facilitators who provide this…

  4. Assumptions regarding right censoring in the presence of left truncation.

    Science.gov (United States)

    Qian, Jing; Betensky, Rebecca A

    2014-04-01

    Clinical studies using complex sampling often involve both truncation and censoring, where there are options for the assumptions of independence of censoring and event and for the relationship between censoring and truncation. In this paper, we clarify these choices, show certain equivalences, and provide examples.

  5. Using a DSGE Model to Assess the Macroeconomic Effects of Reserve Requirements in Brazil

    OpenAIRE

    Waldyr Dutra Areosa; Christiano Arrigoni Coelho

    2013-01-01

    The goal of this paper is to present how a Dynamic Stochastic General Equilibrium (DSGE) model can be used by policy makers in the qualitative and quantitative evaluation of the macroeconomic impacts of two monetary policy instruments: (i) the short-term interest rate and (ii) the reserve requirements ratio. In our model, this last instrument affects the leverage of banks that have to deal with agency problems in order to raise funds from depositors. We estimated a modified version of Gertler and Karadi (2011)...

  6. A Formal Method to Model Early Requirement of Multi-Agent System

    Institute of Scientific and Technical Information of China (English)

    MAO Xin-jun; YU Eric

    2004-01-01

    A formal specification language, iFL, based on the i* framework is presented in this paper to formally specify and analyze the early requirements of multi-agent systems. It is a branching temporal logic that defines the concepts and models of the i* framework in a rigorous way. A method to transform i* models into iFL formal specifications is also put forward.

  7. System Design Description and Requirements for Modeling the Off-Gas Systems for Fuel Recycling Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Daryl R. Haefner; Jack D. Law; Troy J. Tranter

    2010-08-01

    This document provides descriptions of the off-gases evolved during spent nuclear fuel processing and the systems used to capture the gases of concern. Two reprocessing techniques are discussed, namely aqueous separations and electrochemical (pyrochemical) processing. The unit operations associated with each process are described in enough detail so that computer models to mimic their behavior can be developed. The document also lists the general requirements for the desired computer models.

  9. Quantum cryptography in real-life applications: Assumptions and security

    Science.gov (United States)

    Zhao, Yi

    Quantum cryptography, or quantum key distribution (QKD), provides a means of unconditionally secure communication. The security is in principle based on the fundamental laws of physics. Security proofs show that if quantum cryptography is appropriately implemented, even the most powerful eavesdropper cannot decrypt the message from a cipher. The implementations of quantum crypto-systems in real life may not fully comply with the assumptions made in the security proofs. Such discrepancy between the experiment and the theory can be fatal to the security of a QKD system. In this thesis we address a number of these discrepancies. A perfect single-photon source is often assumed in many security proofs. However, a weak coherent source is widely used in a real-life QKD implementation. Decoy state protocols have been proposed as a novel approach to dramatically improve the performance of a weak coherent source based QKD implementation without jeopardizing its security. Here, we present the first experimental demonstrations of decoy state protocols. Our experimental scheme was later adopted by most decoy state QKD implementations. In the security proof of decoy state protocols as well as many other QKD protocols, it is widely assumed that a sender generates a phase-randomized coherent state. This assumption has been enforced in few implementations. We close this gap in two steps: First, we implement and verify the phase randomization experimentally; second, we prove the security of a QKD implementation without the coherent state assumption. In many security proofs of QKD, it is assumed that all the detectors on the receiver's side have identical detection efficiencies. We show experimentally that this assumption may be violated in a commercial QKD implementation due to an eavesdropper's malicious manipulation. Moreover, we show that the eavesdropper can learn part of the final key shared by the legitimate users as a consequence of this violation of the assumptions.

  10. De novo actin polymerization is required for model Hirano body formation in Dictyostelium

    Directory of Open Access Journals (Sweden)

    Yun Dong

    2016-06-01

    Full Text Available Hirano bodies are eosinophilic, actin-rich inclusions found in autopsied brains in numerous neurodegenerative diseases. The mechanism of Hirano body formation is unknown. Mass spectrometry analysis was performed to identify proteins from partially purified model Hirano bodies from Dictyostelium. This analysis identified proteins primarily belonging to ribosomes, proteasomes, mitochondria and cytoskeleton. Profilin, Arp2/3 and WASH, identified by mass spectrometry, were found to colocalise with model Hirano bodies. Due to their roles in actin regulation, we selected these proteins for further investigation. Inhibition of the Arp2/3 complex by CK666 prevented formation of model Hirano bodies. Since Arp2/3 activation occurs via the WASH or WAVE complex, we next investigated how these proteins affect Hirano body formation. Whereas model Hirano bodies could form in WASH-deficient cells, they failed to form in cells lacking HSPC300, a member of the WAVE complex. We identified other proteins required for Hirano body formation that include profilin and VASP, an actin nucleation factor. In the case of VASP, both its G- and F-actin binding domains were required for model Hirano body formation. Collectively, our results indicate that de novo actin polymerization is required to form model Hirano bodies.

  11. Understanding the relationship between Kano model's customer satisfaction scores and self-stated requirements importance.

    Science.gov (United States)

    Mkpojiogu, Emmanuel O C; Hashim, Nor Laily

    2016-01-01

    Customer satisfaction is the result of product quality and viability. The perceived satisfaction of users/customers with a software product cannot be neglected, especially in today's competitive market environment, as it drives customer loyalty and promotes high profitability and return on investment. Therefore, understanding the importance of requirements as it relates to the satisfaction of users/customers when their requirements are met is worth considering. It is necessary to know the relationship between customer satisfaction when requirements are met (or dissatisfaction when they are unmet) and the importance of such requirements. Many works have been carried out on customer satisfaction in connection with the importance of requirements, but the relationship between the customer satisfaction scores (coefficients) of the Kano model and users'/customers' self-stated requirements importance has not been sufficiently explored. In this study, an attempt is made to unravel the underlying relationship between the Kano model's customer satisfaction indexes and users'/customers' self-reported requirements importance. The results of the study indicate some interesting associations between the considered variables. These bivariate associations reveal that the customer satisfaction index (SI) and the average satisfaction coefficient (ASC), and the customer dissatisfaction index (DI) and the average satisfaction coefficient (ASC), are highly correlated (r = 96%), and thus ASC can be used in place of either SI or DI in representing customer satisfaction scores. Also, the Kano model's customer satisfaction variables (SI, DI, and ASC) are each associated with self-stated requirements importance (IMP). Further analysis indicates that the value customers or users place on requirements that are met, or on features that are incorporated into a product, influences the level of satisfaction such customers derive from the product.
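
    A minimal sketch of how such Kano satisfaction scores can be computed and correlated, assuming the common Berger-style coefficient formulas; the category counts, importance ratings and the averaging used for ASC are hypothetical assumptions, not the study's data or exact definitions.

```python
# Minimal sketch of Kano satisfaction coefficients, assuming the common
# Berger-style formulas. Category counts and importance ratings are hypothetical.
import numpy as np

def kano_coefficients(attractive, one_dimensional, must_be, indifferent):
    """Return (SI, DI, ASC) from Kano category counts for one requirement."""
    total = attractive + one_dimensional + must_be + indifferent
    si = (attractive + one_dimensional) / total    # satisfaction index
    di = -(one_dimensional + must_be) / total      # dissatisfaction index
    asc = (abs(si) + abs(di)) / 2.0                # one assumed definition of ASC
    return si, di, asc

# Hypothetical survey counts (A, O, M, I) for three requirements,
# plus self-stated importance ratings on a 1-9 scale.
counts = [(40, 30, 10, 20), (10, 50, 30, 10), (5, 20, 60, 15)]
importance = np.array([7.2, 8.1, 8.6])

si, di, asc = np.array([kano_coefficients(*c) for c in counts]).T
print("SI :", si)
print("DI :", di)
print("ASC:", asc)

# Bivariate (Pearson) correlations of the kind discussed in the record.
print("corr(SI, ASC):", np.corrcoef(si, asc)[0, 1])
print("corr(DI, ASC):", np.corrcoef(di, asc)[0, 1])
print("corr(ASC, importance):", np.corrcoef(asc, importance)[0, 1])
```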

  12. INTEGRATED DATA CAPTURING REQUIREMENTS FOR 3D SEMANTIC MODELLING OF CULTURAL HERITAGE: THE INCEPTION PROTOCOL

    Directory of Open Access Journals (Sweden)

    R. Di Giulio

    2017-02-01

    In order to face these challenges and to start solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. The purpose is to guide the processes of digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.

  13. How Many Model Evaluations Are Required To Predict The AEP Of A Wind Power Plant?

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Natarajan, Anand;

    2015-01-01

    Wind farm flow models have advanced considerably with the use of large eddy simulations (LES) and Reynolds averaged Navier-Stokes (RANS) computations. The main limitation of these techniques is their high computational time requirements, which makes their use for wind farm annual energy production...

  14. Non-formal techniques for requirements elicitation, modeling, and early assessment for services

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Vyas, Dhaval; Dittmar, A.; Forbig, P.

    2011-01-01

    Designing systems for multiple stakeholders requires frequent collaboration with multiple stakeholders from the start. In many cases at least some stakeholders lack a professional habit of formal modeling. We report observations from two case studies of stakeholder involvement in early design where

  15. Tale of Two Courthouses: A Critique of the Underlying Assumptions in Chronic Disease Self-Management for Aboriginal People

    Directory of Open Access Journals (Sweden)

    Isabelle Ellis

    2009-12-01

    Full Text Available This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models: namely, that there is a clear set of instructions for patients to comply with, that all health care providers agree with; and that the health care provider and the patient agree with the chronic disease self-management plan that was developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. These assumptions have been found to lack validity in this context; therefore an alternative model to enhance chronic disease care is proposed.

  16. Evaluating abundance estimate precision and the assumptions of a count-based index for small mammals

    Science.gov (United States)

    Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.

    2009-01-01

    Conservation and management of small mammals requires reliable knowledge of population size. We investigated precision of mark-recapture and removal abundance estimates generated from live-trapping and snap-trapping data collected at sites on Guam (n = 7), Rota (n = 4), Saipan (n = 5), and Tinian (n = 3), in the Mariana Islands. We also evaluated a common index, captures per unit effort (CPUE), as a predictor of abundance. In addition, we evaluated cost and time associated with implementing live-trapping and snap-trapping and compared species-specific capture rates of selected live- and snap-traps. For all species, mark-recapture estimates were consistently more precise than removal estimates based on coefficients of variation and 95% confidence intervals. The predictive utility of CPUE was poor but improved with increasing sampling duration. Nonetheless, modeling of sampling data revealed that underlying assumptions critical to application of an index of abundance, such as constant capture probability across space, time, and individuals, were not met. Although snap-trapping was cheaper and faster than live-trapping, the time difference was negligible when site preparation time was considered. Rattus diardii spp. captures were greatest in Haguruma live-traps (Standard Trading Co., Honolulu, HI) and Victor snap-traps (Woodstream Corporation, Lititz, PA), whereas Suncus murinus and Mus musculus captures were greatest in Sherman live-traps (H. B. Sherman Traps, Inc., Tallahassee, FL) and Museum Special snap-traps (Woodstream Corporation). Although snap-trapping and CPUE may have utility after validation against more rigorous methods, validation should occur across the full range of study conditions. Resources required for this level of validation would likely be better allocated towards implementing rigorous and robust methods.
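
    To make the index-versus-estimator distinction concrete, the sketch below contrasts a catch-per-unit-effort index with a simple two-sample Lincoln-Petersen (Chapman) mark-recapture estimate; all counts are hypothetical, and the estimator is a textbook simplification rather than the models used in the study.

```python
# Minimal sketch contrasting a CPUE index with a two-sample
# Lincoln-Petersen (Chapman) mark-recapture estimate. All numbers are hypothetical.

def cpue(captures, trap_nights):
    """Captures per 100 trap-nights: an index, not an abundance estimate."""
    return 100.0 * captures / trap_nights

def lincoln_petersen_chapman(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate and variance."""
    n1, n2, m2 = marked_first, caught_second, recaptured
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

print("CPUE:", cpue(captures=37, trap_nights=400), "captures / 100 trap-nights")

n_hat, var = lincoln_petersen_chapman(marked_first=45, caught_second=52, recaptured=18)
se = var ** 0.5
print(f"Abundance estimate: {n_hat:.1f} (95% CI ~ {n_hat - 1.96*se:.1f} to {n_hat + 1.96*se:.1f})")
```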

  17. Symbolic regression via genetic programming for data driven derivation of confinement scaling laws without any assumption on their mathematical form

    Science.gov (United States)

    Murari, A.; Peluso, E.; Gelfusa, M.; Lupelli, I.; Lungaroni, M.; Gaudio, P.

    2015-01-01

    Many measurements are required to control thermonuclear plasmas and to fully exploit them scientifically. In recent years JET has shown the potential to generate about 50 GB of data per shot. These amounts of data require more sophisticated data analysis methodologies to perform correct inference, and various techniques have recently been developed in this respect. The present paper covers a new methodology to extract mathematical models directly from the data without any a priori assumption about their expression. The approach, based on symbolic regression via genetic programming, is exemplified using the data of the International Tokamak Physics Activity database for the energy confinement time. The best obtained scaling laws are not in power-law form and suggest revisiting the extrapolation to ITER. Indeed, the best non-power-law scalings predict confinement times in ITER of approximately 2 to 3 s. On the other hand, more comprehensive and better databases are required to fully profit from the power of these new methods and to discriminate between the hundreds of thousands of models that they can generate.
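
    A minimal sketch of symbolic regression via genetic programming in the spirit described above, assuming the third-party gplearn package and synthetic data in place of the ITPA confinement database; the variable set, hyperparameters and data are illustrative assumptions, not the authors' setup.

```python
# Sketch of symbolic regression via genetic programming for a scaling law.
# Uses the third-party gplearn package (an assumption; the authors' own
# implementation differs) and synthetic data instead of the ITPA database.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical engineering variables (e.g. current, field, density, power).
X = np.exp(rng.uniform(-1, 1, size=(n, 4)))
# Synthetic "confinement time" with a non-power-law ingredient plus noise.
y = 0.05 * X[:, 0] ** 0.9 * X[:, 1] ** 0.2 * np.log1p(X[:, 2]) / X[:, 3] ** 0.6
y *= np.exp(rng.normal(0, 0.05, size=n))

est = SymbolicRegressor(
    population_size=1000,
    generations=15,
    function_set=("add", "sub", "mul", "div", "log", "sqrt"),
    parsimony_coefficient=0.001,   # penalise overly complex expressions
    random_state=0,
)
est.fit(X, y)
print(est._program)   # best symbolic expression found
```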

  18. Understanding the requirements imposed by programming model middleware on a common communication subsystem.

    Energy Technology Data Exchange (ETDEWEB)

    Buntinas, D.; Gropp, W.

    2005-12-13

    In high-performance parallel computing, most programming-model middleware libraries and runtime systems use a communication subsystem to abstract the lower-level network layer. The functionality required of a communication subsystem depends largely on the programming model implemented by the middleware. In order to maximize performance, middleware libraries and runtime systems typically implement their own communication subsystems that are specially tuned for the middleware, rather than use an existing communication subsystem. This situation leads to duplicated effort and prevents different middleware libraries from being used by the same application in hybrid programming models. In this paper we describe features required by various middleware libraries as well as some desirable features that would make it easier to port a middleware library to the communication subsystem and allow the middleware to make use of high-performance features provided by some networking layers. We show that none of the communication subsystems that we evaluate support all of the features.

  19. Bases, Assumptions, and Results of the Flowsheet Calculations for the Decision Phase Salt Disposition Alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.

    2001-03-26

    The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted wasteform. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.

  20. Baseline requirements of the proposed action for the Transportation Management Division routing models

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, P.E.; Joy, D.S.

    1995-02-01

    The potential impacts associated with the transportation of hazardous materials are important to shippers, carriers, and the general public. This is particularly true for shipments of radioactive material. The shippers are primarily concerned with safety, security, efficiency, and equipment requirements. The carriers are concerned with the potential impact that radioactive shipments may have on their operations--particularly if such materials are involved in an accident. The general public has also expressed concerns regarding the safety of transporting radioactive and other hazardous materials through their communities. Because transportation routes are a central concern in hazardous material transport, the prediction of likely routes is the first step toward resolution of these issues. In response to these routing needs, several models have been developed over the past fifteen years at Oak Ridge National Laboratory (ORNL). The HIGHWAY routing model is used to predict routes for truck transportation, the INTERLINE routing model is used to predict both rail and barge routes, and the AIRPORT locator model is used to determine airports with specified criteria near a specific location. As part of the ongoing improvement of the US Department of Energy's (DOE) Environmental Management Transportation Management Division's (EM-261) computer systems and development efforts, a Baseline Requirements Assessment Session on the HIGHWAY, INTERLINE, and AIRPORT models was held at ORNL on April 27, 1994. The purpose of this meeting was to discuss the existing capabilities of the models and data bases and to review enhancements of the models and data bases to expand their usefulness. The results of the Baseline Requirements Assessment Session will be discussed in this report. The discussions pertaining to the different models are contained in separate sections.

  1. Modeling the Imprecise Relationship of Goals for Agent-Oriented Requirements Engineering

    Institute of Scientific and Technical Information of China (English)

    SHAOKun; LIUZongtian

    2004-01-01

    Agent concepts have been used in a number of recent approaches to requirements engineering (RE), such as KAOS (Knowledge Acquisition in Automated Specification), i* and GBRAM (Goal-Based Requirements Analysis Method). The modeling languages used in those approaches only permit precise and unambiguous modeling of system properties and behavior. However, some system problems, particularly those drawn from the agent-oriented problem domain, may be difficult to model in crisp or precise terms. There are several reasons for this. On one hand, a lack of information may produce uncertainty about the class to which an object belongs. If we have enough information, or if we are considering sufficient attributes, we should be able to make a precise categorization. On the other hand, uncertainty may also arise from natural imprecision in the requirement description itself, such as the description of soft goals and uncertain concepts. In the second case, classification into precise classes may be impossible, not because we do not have enough information, but because the classes themselves are not naturally discrete. In this paper, we start with a discussion of uncertainty in agent-oriented requirements engineering. Then we propose to handle the uncertainty using fuzzy sets. Finally, we refine this proposal to integrate a fuzzy version of Z with the KAOS method. This integration is illustrated on the example of the mine pump. In the conclusion, we compare the advantages of our approach with those of the classical KAOS approach.
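
    A minimal sketch of the underlying idea of treating an imprecise (soft) requirement as a fuzzy set; the membership function, thresholds and the mine-pump response-time example below are hypothetical illustrations, not the fuzzy-Z/KAOS formalisation itself.

```python
# Minimal sketch of representing a soft goal ("pump reacts quickly") as a
# fuzzy set. The membership function and thresholds are hypothetical
# illustrations, not the fuzzy-Z/KAOS formalisation described in the record.

def mu_quick_reaction(response_time_s: float) -> float:
    """Degree (0..1) to which a pump response time satisfies 'reacts quickly'."""
    full, zero = 1.0, 5.0          # <=1 s fully satisfies, >=5 s not at all (assumed)
    if response_time_s <= full:
        return 1.0
    if response_time_s >= zero:
        return 0.0
    return (zero - response_time_s) / (zero - full)   # linear ramp in between

for t in (0.5, 2.0, 4.0, 6.0):
    print(f"response {t:.1f}s -> membership {mu_quick_reaction(t):.2f}")

# Fuzzy conjunction/disjunction over goals can then use min/max, e.g.:
goal_satisfaction = min(mu_quick_reaction(2.0), 0.8)   # AND with another soft goal
print("combined soft-goal satisfaction:", goal_satisfaction)
```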

  2. Modeling Requirements for Simulating the Effects of Extreme Acts of Terrorism: A White Paper

    Energy Technology Data Exchange (ETDEWEB)

    Allen, M.; Hiebert-Dodd, K.; Marozas, D.; Paananen, O.; Pryor, R.J.; Reinert, R.K.

    1998-10-01

    This white paper presents the initial requirements for developing a new computer model for simulating the effects of extreme acts of terrorism in the United States. General characteristics of the model are proposed and the level of effort to prepare a complete written description of the model, prior to coding, is detailed. The model would simulate the decision processes and interactions of complex U. S. systems engaged in responding to and recovering from four types of terrorist incidents. The incident scenarios span the space of extreme acts of terrorism that have the potential to affect not only the impacted area, but also the entire nation. The model would be useful to decision-makers in assessing and analyzing the vulnerability of the nation's complex infrastructures, in prioritizing resources to reduce risk, and in planning strategies for immediate response and for subsequent recovery from terrorist incidents.

  3. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    Full Text Available The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for), to see whether they are important, in general or with respect to the specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  4. A "unity assumption" does not promote intersensory integration.

    Science.gov (United States)

    Misceo, Giovanni F; Taylor, Nathanael J

    2011-01-01

    An account of intersensory integration is premised on knowing that different sensory inputs arise from the same object. Could, however, the combination of the inputs be impaired although the "unity assumption" holds? Forty observers viewed a square through a minifying (50%) lens while they simultaneously touched the square. Half could see and half could not see their haptic explorations of the square. Both groups, however, had reason to believe that they were touching and viewing the same square. Subsequent matches of the inspected square were mutually biased by touch and vision when the exploratory movements were visible. However, the matches were biased in the direction of the square's haptic size when observers could not see their exploratory movements. This impaired integration without the visible haptic explorations suggests that the unity assumption alone is not enough to promote intersensory integration.

  5. Singularity free N-body simulations called 'Dynamic Universe Model' don't require dark matter

    Science.gov (United States)

    Naga Parameswara Gupta, Satyavarapu

    The calculations of the Dynamic Universe Model can be applied successfully to finding the trajectories of the Pioneer satellites (the Pioneer anomaly) and of the New Horizons satellite travelling to Pluto. No dark matter is assumed within the solar system radius. The effect on the masses around the Sun appears as though there were an extra gravitational pull toward the Sun. The model solves the dynamics of extra-solar planets like Planet X and of satellites like Pioneer and New Horizons, giving 3-position, 3-velocity and 3-acceleration for their masses, considering the complex situation of multiple planets, stars, galaxy parts, the galaxy centre and other galaxies, using simple Newtonian physics. It has already solved the problem of the missing mass in galaxies observed through galaxy circular velocity curves. Singularity-free Newtonian N-body simulations: historically, King Oscar II of Sweden announced a prize for a solution of the N-body problem, with advice given by Gösta Mittag-Leffler in 1887. He announced: 'Given a system of arbitrarily many mass points that attract each other according to Newton's law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.' [This is taken from Wikipedia.] The announced deadline at that time was 1 June 1888. After that deadline, on 21 January 1889, the great mathematician Poincaré claimed the prize. Later he himself sent a telegram to the journal Acta Mathematica to stop printing the special issue after finding an error in his solution; for such a man of science, reputation was more important than money. [Ref: the book 'Celestial Mechanics: The Waltz of the Planets' by Alessandra Celletti and Ettore Perozzi, page 27.] He realized that he had been wrong in his general stability result. But to this day nobody has solved that problem or claimed that prize. Later solutions all resulted in singularities and collisions of masses, given by many people
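
    For readers who want the flavour of a Newtonian N-body computation, the sketch below shows a generic kick-drift-kick leapfrog integrator; it is not the Dynamic Universe Model's own formulation, and the softening length and toy Sun-Earth values are assumptions made for illustration.

```python
# Generic Newtonian N-body sketch (kick-drift-kick leapfrog). This is an
# illustration only, not the Dynamic Universe Model's own formulation; the
# small softening length is a numerical device to avoid singular forces.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
SOFTENING = 1.0e7      # m, hypothetical

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations for all bodies."""
    diff = pos[None, :, :] - pos[:, None, :]                 # r_j - r_i
    dist2 = (diff ** 2).sum(-1) + SOFTENING ** 2
    np.fill_diagonal(dist2, np.inf)                          # no self-force
    return (G * mass[None, :, None] * diff / dist2[..., None] ** 1.5).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc          # kick
        pos += dt * vel                # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc          # kick
    return pos, vel

# Toy Sun-Earth-like setup (hypothetical values).
mass = np.array([1.989e30, 5.972e24])
pos = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]])
pos, vel = leapfrog(pos, vel, mass, dt=3600.0, steps=24 * 365)
print("Earth position after ~1 year (m):", pos[1])
```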

  6. Assumptions and realities of the NCLEX-RN.

    Science.gov (United States)

    Aucoin, Julia W; Treas, Leslie

    2005-01-01

    Every three years the National Council of State Boards of Nursing conducts a practice analysis to verify the activities that are tested on the licensure exam (NCLEX-RN). Faculty can benefit from information in the practice analysis to ensure that courses and experiences adequately prepare graduates for the NCLEX-RN. This summary of the practice analysis challenges common assumptions and provides recommendations for faculty.

  7. The sufficiency assumption of the reasoned approach to action

    OpenAIRE

    David Trafimow

    2015-01-01

    The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing if new variables accou...

  8. Footbridge Response Predictions and Their Sensitivity to Stochastic Load Assumptions

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2011-01-01

    Knowledge about footbridge response to the actions of walking is important in assessments of vibration serviceability. In a number of design codes for footbridges, the vibration serviceability limit state is assessed using a walking load model in which the walking parameters (step frequency, …) … of pedestrians for predicting footbridge response, which is meaningful, and a step forward. Modelling walking parameters stochastically, however, requires decisions to be made in terms of their statistical distribution and the parameters describing that distribution. The paper investigates … the sensitivity of results of computations of bridge response to some of the decisions to be made in this respect. This is a useful approach, placing focus on which decisions (and which information) are important for sound estimation of bridge response. The studies involve estimating footbridge responses using...
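
    A minimal Monte Carlo sketch of the kind of sensitivity question raised above: how an assumed step-frequency distribution propagates into predicted footbridge acceleration via a single-degree-of-freedom resonant-response approximation. All bridge and pedestrian parameters below are hypothetical and not taken from the paper or any design code.

```python
# Monte Carlo sketch of how the assumed step-frequency distribution
# propagates into predicted footbridge acceleration. A single-degree-of-
# freedom steady-state approximation is used; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Bridge modal properties (hypothetical).
f_n, zeta, modal_mass = 2.0, 0.005, 40e3     # Hz, damping ratio, kg

def peak_acceleration(step_freq, weight=750.0, dlf=0.4):
    """Steady-state acceleration of an SDOF mode under a harmonic walking load."""
    beta = step_freq / f_n                                 # frequency ratio
    amp = dlf * weight                                     # harmonic load amplitude (N)
    H = 1.0 / np.sqrt((1 - beta**2) ** 2 + (2 * zeta * beta) ** 2)
    # beta^2 converts displacement response to acceleration; the 0.5 is a
    # crude mode-shape reduction factor (an assumption for this sketch).
    return amp * beta**2 * H / modal_mass * 0.5

# Two different distribution assumptions for pedestrian step frequency.
samples = 100_000
for label, freqs in {
    "N(1.99, 0.17) Hz": rng.normal(1.99, 0.17, samples),
    "N(1.87, 0.19) Hz": rng.normal(1.87, 0.19, samples),
}.items():
    acc = peak_acceleration(freqs)
    print(f"{label}: 95th-percentile acceleration = {np.quantile(acc, 0.95):.3f} m/s^2")
```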

  9. A new model to predict acute kidney injury requiring renal replacement therapy after cardiac surgery

    Science.gov (United States)

    Pannu, Neesh; Graham, Michelle; Klarenbach, Scott; Meyer, Steven; Kieser, Teresa; Hemmelgarn, Brenda; Ye, Feng; James, Matthew

    2016-01-01

    Background: Acute kidney injury after cardiac surgery is associated with adverse in-hospital and long-term outcomes. Novel risk factors for acute kidney injury have been identified, but it is unknown whether their incorporation into risk models substantially improves prediction of postoperative acute kidney injury requiring renal replacement therapy. Methods: We developed and validated a risk prediction model for acute kidney injury requiring renal replacement therapy within 14 days after cardiac surgery. We used demographic, and preoperative clinical and laboratory data from 2 independent cohorts of adults who underwent cardiac surgery (excluding transplantation) between Jan. 1, 2004, and Mar. 31, 2009. We developed the risk prediction model using multivariable logistic regression and compared it with existing models based on the C statistic, Hosmer–Lemeshow goodness-of-fit test and Net Reclassification Improvement index. Results: We identified 8 independent predictors of acute kidney injury requiring renal replacement therapy in the derivation model (adjusted odds ratio, 95% confidence interval [CI]): congestive heart failure (3.03, 2.00–4.58), Canadian Cardiovascular Society angina class III or higher (1.66, 1.15–2.40), diabetes mellitus (1.61, 1.12–2.31), baseline estimated glomerular filtration rate (0.96, 0.95–0.97), increasing hemoglobin concentration (0.85, 0.77–0.93), proteinuria (1.65, 1.07–2.54), coronary artery bypass graft (CABG) plus valve surgery (v. CABG only, 1.25, 0.64–2.43), other cardiac procedure (v. CABG only, 3.11, 2.12–4.58) and emergent status for surgery booking (4.63, 2.61–8.21). The 8-variable risk prediction model had excellent performance characteristics in the validation cohort (C statistic 0.83, 95% CI 0.79–0.86). The net reclassification improvement with the prediction model was 13.9% (p < 0.001) compared with the best existing risk prediction model (Cleveland Clinic Score). Interpretation: We have developed
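
    A hedged sketch of how the reported adjusted odds ratios could be combined into a predicted probability via the logistic model; the intercept and the example patient below are hypothetical placeholders, since the excerpt does not report them, so the output is purely illustrative.

```python
# Sketch of turning the reported adjusted odds ratios into a predicted
# probability of AKI requiring renal replacement therapy. The intercept and
# the example patient are hypothetical placeholders (the excerpt does not
# report the intercept or variable scaling), so the output is illustrative only.
import math

odds_ratios = {
    "chf": 3.03,                 # congestive heart failure (yes=1)
    "ccs_class_ge3": 1.66,       # CCS angina class III or higher
    "diabetes": 1.61,
    "egfr": 0.96,                # assumed per unit of baseline eGFR
    "hemoglobin": 0.85,          # assumed per unit of hemoglobin
    "proteinuria": 1.65,
    "cabg_plus_valve": 1.25,     # vs. CABG only
    "other_cardiac": 3.11,       # vs. CABG only
    "emergent": 4.63,
}
betas = {k: math.log(v) for k, v in odds_ratios.items()}
INTERCEPT = -6.0                 # hypothetical; not reported in the excerpt

patient = {"chf": 1, "ccs_class_ge3": 0, "diabetes": 1, "egfr": 45.0,
           "hemoglobin": 11.0, "proteinuria": 1, "cabg_plus_valve": 0,
           "other_cardiac": 0, "emergent": 0}

logit = INTERCEPT + sum(betas[k] * patient[k] for k in betas)
prob = 1.0 / (1.0 + math.exp(-logit))
print(f"Predicted probability of AKI requiring RRT: {prob:.2%}")
```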

  10. Deriving required model structures to predict global wildfire burned area from multiple satellite and climate observations

    Science.gov (United States)

    Forkel, Matthias; Dorigo, Wouter; Lasslop, Gitta; Teubner, Irene; Chuvieco, Emilio; Thonicke, Kirsten

    2017-04-01

    Vegetation fires have important effects on human infrastructures and ecosystems, and affect atmospheric composition and the climate system. Consequently, it is necessary to accurately represent fire dynamics in global vegetation models to realistically represent the role of fires in the Earth system. However, it is unclear which model structures are required in global vegetation/fire models to represent fire activity at regional to global scales. Here we aim to identify required structural components and necessary complexities of global vegetation/fire models to predict spatial-temporal dynamics of burned area. For this purpose, we developed the SOFIA (satellite observations for fire activity) modelling approach to predict burned area from several satellite and climate datasets. A large ensemble of SOFIA models was generated and each model was optimized against observed burned area data. Models that account for a suppression of fire activity at wet conditions result in the highest performances in predicting burned area. Models that include vegetation optical depth data from microwave satellite observations reach higher performances in predicting burned area than models that do not include this dataset. Vegetation optical depth is a proxy for vegetation biomass, density and water content and thus indicates a strong control of vegetation states and dynamics on fire activity. We further compared the best performing SOFIA models with the global process-oriented vegetation/fire model JSBACH-SPITFIRE, and with the GFED and Fire_CCI burned area datasets. SOFIA models outperform JSBACH-SPITFIRE in predicting regional variabilities of burned area. We further applied the best SOFIA model to identify controlling factors for burned area. The results indicate that fire activity is controlled by regionally diverse and complex interactions of human, vegetation and climate factors. Our results demonstrate that the use of multiple observational datasets on climate, hydrological
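
    A plausible SOFIA-style model structure, sketched under the assumption that fractional burned area is expressed as a product of logistic functions of predictors (vegetation optical depth and a wetness proxy) and fitted to synthetic data; the predictors, functional details and data are illustrative assumptions, not the published model ensemble.

```python
# Sketch of a SOFIA-style empirical burned-area model: fractional burned
# area expressed as a product of logistic functions of predictors, fitted to
# synthetic data. Predictor choice, functional form and data are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def logistic(x, slope, x0):
    return 1.0 / (1.0 + np.exp(-slope * (x - x0)))

def predict(params, vod, wetness):
    fmax, s_v, v0, s_w, w0 = params
    # Fire increases with biomass (VOD) and is suppressed under wet conditions.
    return fmax * logistic(vod, s_v, v0) * (1.0 - logistic(wetness, s_w, w0))

# Synthetic "observations".
vod = rng.uniform(0.0, 1.2, 1000)
wetness = rng.uniform(0.0, 1.0, 1000)
true = predict((0.3, 8.0, 0.4, 10.0, 0.6), vod, wetness)
obs = np.clip(true + rng.normal(0, 0.01, vod.size), 0, 1)

def sse(params):
    return np.sum((predict(params, vod, wetness) - obs) ** 2)

fit = minimize(sse, x0=[0.2, 5.0, 0.5, 5.0, 0.5], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
print("fitted parameters:", np.round(fit.x, 3))
```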

  11. NGNP: High Temperature Gas-Cooled Reactor Key Definitions, Plant Capabilities, and Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Phillip Mills

    2012-02-01

    This document is intended to provide a Next Generation Nuclear Plant (NGNP) Project tool in which to collect and identify key definitions, plant capabilities, and inputs and assumptions to be used in ongoing efforts related to the licensing and deployment of a high temperature gas-cooled reactor (HTGR). These definitions, capabilities, and assumptions are extracted from a number of sources, including NGNP Project documents such as licensing related white papers [References 1-11] and previously issued requirement documents [References 13-15]. Also included is information agreed upon by the NGNP Regulatory Affairs group's Licensing Working Group and Configuration Council. The NGNP Project approach to licensing an HTGR plant via a combined license (COL) is defined within the referenced white papers and reference [12], and is not duplicated here.

  12. Load assumption for fatigue design of structures and components counting methods, safety aspects, practical application

    CERN Document Server

    Köhler, Michael; Pötter, Kurt; Zenner, Harald

    2017-01-01

    Understanding the fatigue behaviour of structural components under variable load amplitude is an essential prerequisite for safe and reliable light-weight design. For designing and dimensioning, the expected stress (load) is compared with the capacity to withstand loads (fatigue strength). In this process, the safety necessary for each particular application must be ensured. A prerequisite for ensuring the required fatigue strength is a reliable load assumption. The authors describe the transformation of the stress- and load-time functions which have been measured under operational conditions to spectra or matrices with the application of counting methods. The aspects which must be considered for ensuring a reliable load assumption for designing and dimensioning are discussed in detail. Furthermore, the theoretical background for estimating the fatigue life of structural components is explained, and the procedures are discussed for numerous applications in practice. One of the prime intentions of the authors ...

  13. A mathematical model for metabolic tradeoffs, minimal requirements, and evolutionary transitions. (Invited)

    Science.gov (United States)

    Kempes, C.; Hoehler, T. M.; Follows, M. J.; Dutkiewicz, S.

    2013-12-01

    Understanding the minimal energy requirements for life is a difficult challenge because of the great variety of processes required for life. Our approach is to discover general trends applicable to diverse species in order to understand the average constraints faced by life. We then leverage these trends to predict minimal requirements for life. We have focused on broad trends in metabolism, growth, basic bioenergetics, and overall genomic structure and composition. We have developed a simple mathematical model of metabolic partitioning which is able to capture the growth of both single cells and populations of cells for diverse organisms spanning the three domains of life. This model also anticipates the observed interspecific trends in population growth rate and predicts the observed minimum size of a bacterium. Our model connects evolutionary limitations and transitions, including minimal life, to energetic constraints imposed by body architecture and the metabolism of a given species. This model can also be connected to genomic variation across species in order to describe the tradeoffs associated with various genes and their functionality. This forms the basis for a theory of the possibility space for minimal physiological function given evolutionary tradeoffs, general metabolic and biological architecture, and the energetic limitations of the environment.
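
    A sketch of a metabolic-partitioning growth model of the general kind described above, assuming the commonly used energy balance in which metabolic power scaling as m^(3/4) is split between biomass synthesis and maintenance; all parameter values are hypothetical.

```python
# Sketch of a metabolic-partitioning growth model of the general kind
# described above: metabolic power ~ B0 * m^(3/4) is split between the cost
# of synthesising new biomass (Em * dm/dt) and maintaining existing biomass
# (Bm * m). Parameter values are hypothetical, not the paper's.

B0 = 0.5    # metabolic normalisation (W kg^-3/4), hypothetical
BM = 0.2    # maintenance cost per unit mass (W/kg), hypothetical
EM = 5e3    # energetic cost to synthesise a unit of biomass (J/kg), hypothetical

def dm_dt(m):
    """Growth rate from the energy balance B0*m^(3/4) = Em*dm/dt + Bm*m."""
    return (B0 * m ** 0.75 - BM * m) / EM

# Asymptotic (maintenance-limited) mass where growth stops: B0*m^(3/4) = Bm*m.
m_asym = (B0 / BM) ** 4
print(f"asymptotic mass: {m_asym:.2f} kg")

# Simple forward-Euler integration of the growth trajectory.
m, dt, t = 0.01, 60.0, 0.0
for step in range(200_000):
    m += dt * dm_dt(m)
    t += dt
    if step % 40_000 == 0:
        print(f"t = {t / 86400.0:7.1f} d, m = {m:.3f} kg")
```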

  14. On data requirements for calibration of integrated models for urban water systems.

    Science.gov (United States)

    Langeveld, Jeroen; Nopens, Ingmar; Schilperoort, Remy; Benedetti, Lorenzo; de Klein, Jeroen; Amerlinck, Youri; Weijers, Stefan

    2013-01-01

    Modeling of integrated urban water systems (IUWS) has seen a rapid development in recent years. Models and software are available that describe the process dynamics in sewers, wastewater treatment plants (WWTPs), receiving water systems as well as at the interfaces between the submodels. Successful applications of integrated modeling are, however, relatively scarce. One of the reasons for this is the lack of high-quality monitoring data with the required spatial and temporal resolution and accuracy to calibrate and validate the integrated models, even though the state of the art of monitoring itself is no longer the limiting factor. This paper discusses the efforts to be able to meet the data requirements associated with integrated modeling and describes the methods applied to validate the monitoring data and to use submodels as software sensor to provide the necessary input for other submodels. The main conclusion of the paper is that state of the art monitoring is in principle sufficient to provide the data necessary to calibrate integrated models, but practical limitations resulting in incomplete data-sets hamper widespread application. In order to overcome these difficulties, redundancy of future monitoring networks should be increased and, at the same time, data handling (including data validation, mining and assimilation) should receive much more attention.

  15. A New Rapid Simplified Model for Urban Rainstorm Inundation with Low Data Requirements

    Directory of Open Access Journals (Sweden)

    Ji Shen

    2016-11-01

    Full Text Available This paper proposes a new rapid simplified inundation model (NRSIM) for flood inundation caused by rainstorms in an urban setting that can simulate the urban rainstorm inundation extent and depth in a data-scarce area. Drainage basins delineated from a floodplain map according to the distribution of the inundation sources serve as the calculation cells of NRSIM. To reduce data requirements and computational costs of the model, the internal topography of each calculation cell is simplified to a circular cone, and a mass conservation equation based on a volume spreading algorithm is established to simulate the interior water filling process. Moreover, an improved D8 algorithm is outlined for the simulation of water spilling between different cells. The performance of NRSIM is evaluated by comparing the simulated results with those from a traditional rapid flood spreading model (TRFSM) for various resolutions of digital elevation model (DEM) data. The results are as follows: (1) given high-resolution DEM data input, the TRFSM model has better performance in terms of precision than NRSIM; (2) the results from TRFSM are seriously affected by the decrease in DEM data resolution, whereas those from NRSIM are not; and (3) NRSIM always requires less computational time than TRFSM. Apparently, compared with the complex hydrodynamic or traditional rapid flood spreading model, NRSIM has much better applicability and cost-efficiency in real-time urban inundation forecasting for data-sparse areas.
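
    A minimal sketch of the conical-cell simplification described above: a stored water volume maps analytically to an inundation depth, and volume above the cell rim spills to a neighbour. The geometry and inflow values are hypothetical, and the spilling logic is a one-cell stand-in for the improved D8 routing.

```python
# Sketch of the conical-cell idealisation: each calculation cell's internal
# topography is treated as an inverted circular cone, so a stored water
# volume maps analytically to a depth; water above the rim spills onward.
# Geometry values are hypothetical.
import math

def depth_from_volume(volume_m3, side_slope):
    """Water depth in an inverted cone whose surface rises with the given slope."""
    # V = pi * h^3 / (3 * s^2)  =>  h = (3 V s^2 / pi)^(1/3)
    return (3.0 * volume_m3 * side_slope ** 2 / math.pi) ** (1.0 / 3.0)

def fill_cell(volume_m3, side_slope, rim_height_m):
    """Return (depth, spilled volume) after filling a conical cell up to its rim."""
    capacity = math.pi * rim_height_m ** 3 / (3.0 * side_slope ** 2)
    if volume_m3 <= capacity:
        return depth_from_volume(volume_m3, side_slope), 0.0
    return rim_height_m, volume_m3 - capacity

# Hypothetical cell: side slope 0.02 (2 %), rim 0.5 m above the lowest point.
for v in (50.0, 200.0, 2000.0):
    depth, spill = fill_cell(v, side_slope=0.02, rim_height_m=0.5)
    print(f"inflow {v:7.1f} m^3 -> depth {depth:.2f} m, spill {spill:.1f} m^3")
```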

  16. Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation

    Science.gov (United States)

    Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels

    2016-06-01

    An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.

  17. Risk Pooling, Commitment and Information: An experimental test of two fundamental assumptions

    OpenAIRE

    Abigail Barr

    2003-01-01

    This paper presents rigorous and direct tests of two assumptions relating to limited commitment and asymmetric information that currently underpin models of risk pooling. A specially designed economic experiment involving 678 subjects across 23 Zimbabwean villages is used to solve the problems of observability and quantification that have frustrated previous attempts to conduct such tests. I find that more extrinsic commitment is associated with more risk pooling, but that more informat...

  18. Modeling of Car-Following Required Safe Distance Based on Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Dayi Qu

    2014-01-01

    Full Text Available In the car-following procedure, some distance is reserved between vehicles, through which drivers can avoid collisions with the vehicles before and after them in the same lane and keep a reasonable clearance from lateral vehicles. This paper investigates characteristics of vehicle operating safety in the car-following state based on the required safe distance. To tackle this problem, we probe into the required safe distance and a car-following model using molecular dynamics, covering longitudinal and lateral safe distances. The model was developed and implemented to describe the relationship between the longitudinal safe distance and the lateral safe distance under the condition where the leader maintains uniform deceleration. The results obtained herein are deemed valuable for car-following theory and microscopic traffic simulation.
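
    A minimal sketch of a longitudinal required safe distance under leader braking, using standard reaction-plus-braking kinematics; the parameter values are hypothetical, and this is not the paper's molecular-dynamics formulation.

```python
# Sketch of a longitudinal required safe distance when the leader brakes at
# uniform deceleration: reaction-time travel plus the difference in braking
# distances plus a standstill margin. Standard kinematics with hypothetical
# parameters; not the molecular-dynamics formulation of the record.

def required_safe_distance(v_follow, v_lead, a_follow, a_lead,
                           reaction_time=1.0, margin=2.0):
    """Minimum gap (m) so the follower can stop without hitting the braking leader."""
    d_reaction = v_follow * reaction_time              # travelled before braking starts
    d_brake_follow = v_follow ** 2 / (2.0 * a_follow)  # follower's braking distance
    d_brake_lead = v_lead ** 2 / (2.0 * a_lead)        # leader's braking distance
    return max(d_reaction + d_brake_follow - d_brake_lead + margin, margin)

# Both vehicles at 20 m/s (72 km/h); leader brakes harder than the follower can.
gap = required_safe_distance(v_follow=20.0, v_lead=20.0,
                             a_follow=3.5, a_lead=5.0, reaction_time=1.2)
print(f"required longitudinal safe distance: {gap:.1f} m")
```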

  19. Required levels of catalysis for emergence of autocatalytic sets in models of chemical reaction systems.

    Science.gov (United States)

    Hordijk, Wim; Kauffman, Stuart A; Steel, Mike

    2011-01-01

    The formation of a self-sustaining autocatalytic chemical network is a necessary but not sufficient condition for the origin of life. The question of whether such a network could form "by chance" within a sufficiently complex suite of molecules and reactions is one that we have investigated for a simple chemical reaction model based on polymer ligation and cleavage. In this paper, we extend this work in several further directions. In particular, we investigate in more detail the levels of catalysis required for a self-sustaining autocatalytic network to form. We study the size of chemical networks within which we might expect to find such an autocatalytic subset, and we extend the theoretical and computational analyses to models in which catalysis requires template matching.

  20. Required Levels of Catalysis for Emergence of Autocatalytic Sets in Models of Chemical Reaction Systems

    Directory of Open Access Journals (Sweden)

    Wim Hordijk

    2011-05-01

    Full Text Available The formation of a self-sustaining autocatalytic chemical network is a necessary but not sufficient condition for the origin of life. The question of whether such a network could form “by chance” within a sufficiently complex suite of molecules and reactions is one that we have investigated for a simple chemical reaction model based on polymer ligation and cleavage. In this paper, we extend this work in several further directions. In particular, we investigate in more detail the levels of catalysis required for a self-sustaining autocatalytic network to form. We study the size of chemical networks within which we might expect to find such an autocatalytic subset, and we extend the theoretical and computational analyses to models in which catalysis requires template matching.
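
    A toy reimplementation of the question studied above, assuming a binary polymer model with ligation reactions only, random catalysis, and an iterated-reduction check for a RAF (reflexively autocatalytic, food-generated) subset; the parameters and scale are illustrative assumptions, not the authors' code.

```python
# Toy sketch of the binary-polymer autocatalytic-set question: build all
# ligation reactions up to a maximum polymer length, assign catalysis at
# random with an average of f catalysed reactions per molecule, and check
# for a RAF subset by iterated reduction. Illustrative reimplementation only.
import itertools
import random

MAX_LEN = 6
FOOD_LEN = 2                       # food set: all polymers of length <= 2

molecules = ["".join(p) for L in range(1, MAX_LEN + 1)
             for p in itertools.product("01", repeat=L)]
food = {m for m in molecules if len(m) <= FOOD_LEN}

# Ligation reactions a + b -> ab (cleavage omitted for brevity).
reactions = [((a, b), a + b) for a in molecules for b in molecules
             if len(a) + len(b) <= MAX_LEN]

def has_raf(avg_catalysed_per_molecule, seed):
    rng = random.Random(seed)
    p = avg_catalysed_per_molecule / len(reactions)
    catalysts = {i: {m for m in molecules if rng.random() < p}
                 for i in range(len(reactions))}
    active = set(range(len(reactions)))
    while True:
        # Closure of the food set under the currently active reactions.
        present, changed = set(food), True
        while changed:
            changed = False
            for i in active:
                (a, b), c = reactions[i]
                if a in present and b in present and c not in present:
                    present.add(c)
                    changed = True
        # Keep only reactions whose reactants AND some catalyst are present.
        keep = {i for i in active
                if reactions[i][0][0] in present and reactions[i][0][1] in present
                and catalysts[i] & present}
        if keep == active:
            return len(active) > 0
        active = keep

for f in (0.5, 1.0, 1.5, 2.0):
    hits = sum(has_raf(f, seed) for seed in range(20))
    print(f"avg catalysed per molecule = {f:.1f}: RAF found in {hits}/20 trials")
```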

  1. Modeling and verifying Web services driven by requirements: An ontology-based approach

    Institute of Scientific and Technical Information of China (English)

    HOU Lishan; JIN ZHi; WU Budan

    2006-01-01

    Automatic discovery and composition of Web services is an important research area in Web service technology, in which the specification of Web services is a key issue. This paper presents a Web service capability description framework based on the environment ontology. This framework depicts Web service capability in two aspects: the operable environment and the environment changes resulting from behaviors of the Web service. On the basis of the framework, a requirement-driven Web service composition model has been constructed. This paper puts forward the formalization of Web service interactions with the π-calculus, and an automatic mechanism converting the conceptual capability description to the formal process expression has been built. This kind of formal specification assists in verifying whether the composite Web service model matches the requirement.

  2. Dependencies among Architectural Views Got from Software Requirements Based on a Formal Model

    Directory of Open Access Journals (Sweden)

    Osis Janis

    2014-12-01

    Full Text Available A system architect has software requirements and some unspecified knowledge about a problem domain (e.g., an enterprise) as source information for the assessment and evaluation of possible solutions and for reaching the target point, a preliminary software design. The deciding factor is the architect's experience and expertise in the problem domain (“AS-IS”). The proposed approach is dedicated to assisting a system architect in making an appropriate decision on the solution (“TO-BE”). It is based on a formal mathematical model, the Topological Functioning Model (TFM). Compliant TFMs can be transformed into software architectural views. The paper demonstrates and discusses tracing dependency links from the requirements to and between the architectural views.

  3. 42 CFR 476.74 - General requirements for the assumption of review.

    Science.gov (United States)

    2010-10-01

    ... HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS UTILIZATION AND QUALITY CONTROL REVIEW Review Responsibilities of Utilization and Quality Control Quality Improvement Organizations (QIOs... inspection at its principal business office— (1) A copy of each agreement with Medicare fiscal...

  4. Military Readiness: Navy Needs to Reassess Its Metrics and Assumptions for Ship Crewing Requirements and Training

    Science.gov (United States)

    2010-06-01

    best practices. For example, beginning in October 2001, it conducted pilot tests to reduce the at-sea workload on the guided-missile cruiser USS Mobile Bay and the guided-missile destroyer USS Milius. These tests consisted of job task analysis and work studies to verify that proposed reductions to...

  5. Expressing Environment Assumptions and Real-time Requirements for a Distributed Embedded System with Shared Variables

    DEFF Research Database (Denmark)

    Tjell, Simon; Fernandes, João Miguel

    2008-01-01

    In a distributed embedded system, it is often necessary to share variables among its computing nodes to allow the distribution of control algorithms. It is therefore necessary to include a component in each node that provides the service of variable sharing. For that type of component, this paper...

  6. A case study in modeling company policy documents as a source of requirements

    Energy Technology Data Exchange (ETDEWEB)

    CRUMPTON,KATHLEEN MARIE; GONZALES,REGINA M.; TRAUTH,SHARON L.

    2000-04-11

    This paper describes an approach that was developed to produce structured models that graphically reflect the requirements contained within a text document. The document used in this research is a draft policy document governing business in a research and development environment. In this paper, the authors present a basic understanding of why this approach is needed, the techniques developed, lessons learned during modeling and analysis, and recommendations for future investigation. The modeling method applied to the policy document was developed as an extension to entity relationship (ER) diagrams, which builds in some structural information typically associated with object-oriented techniques. This approach afforded some structure as an analysis tool, while remaining flexible enough to be used with the text document. It provided a visual representation that allowed further analysis and layering of the model to be done.

  7. Expansion of the Kano model to identify relevant customer segments and functional requirements

    DEFF Research Database (Denmark)

    Atlason, Reynir Smari; Stefansson, Arnaldur Smari; Wietz, Miriam

    2017-01-01

    The Kano model of customer satisfaction has been widely used to analyse the perceived needs of customers. The model provides product developers valuable information about if, and then how much, a given functional requirement (FR) will impact customer satisfaction if implemented within a product, system or a service. A current limitation of the Kano model is that it does not allow developers to visualise which combined sets of FRs would provide the highest satisfaction between different customer segments. In this paper, a stepwise method to address this particular shortcoming is presented. First, … are identified. At last, the functions of the chosen segments with the smallest interval define the FRs appealing to the biggest target group. The proposed extension to the model should assist product developers within various fields to more effectively evaluate which FRs should be implemented when considering...

  8. Requirements for tolerances in a CAM-I generalized, solid geometric modeling system

    Energy Technology Data Exchange (ETDEWEB)

    Easterday, R.J.

    1980-01-01

    For a geometric modeling system to support computer-assisted manufacturing, it is necessary that dimensioning and tolerancing information be available in computer-readable form. The requirements of a tolerancing scheme within a geometric modeling system are discussed; they include structure sufficient to characterize the tolerance specifications currently in use by industry, means to associate tolerance structures to the boundary representation, means to create and edit information in the tolerance structures, means to extract information from the data base, and functions to check for completeness and validity of the tolerances. 1 figure, 8 tables. (RWR)

  9. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  10. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Science.gov (United States)

    Hsu, Anne; Griffiths, Thomas L

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  11. Bayesian Mass Estimates of the Milky Way II: The dark and light sides of parameter assumptions

    CERN Document Server

    Eadie, Gwendolyn M

    2016-01-01

    We present mass and mass profile estimates for the Milky Way Galaxy using the Bayesian analysis developed by Eadie et al. (2015b) and using globular clusters (GCs) as tracers of the Galactic potential. The dark matter and GCs are assumed to follow different spatial distributions; we assume power-law model profiles and use the model distribution functions described in Evans et al. (1997) and Deason et al. (2011, 2012a). We explore the relationships between assumptions about model parameters and how these assumptions affect mass profile estimates. We also explore how using subsamples of the GC population beyond certain radii affects mass estimates. After exploring the posterior distributions of different parameter assumption scenarios, we conclude that a conservative estimate of the Galaxy's mass within 125 kpc is $5.22\times10^{11} M_{\odot}$, with a $50\%$ probability region of $(4.79, 5.63) \times10^{11} M_{\odot}$. Extrapolating out to the virial radius, we obtain a virial mass for the Milky Way of $6.82\times10^{...

  12. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
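
    A toy illustration of the strong- versus weak-sampling distinction described in these records, using a two-hypothesis Bayesian learner; the grammars and numbers below are hypothetical.

```python
# Toy illustration of strong vs weak sampling. Two hypothetical grammars:
# H1 licenses constructions {A, B}, H2 licenses only {A}. The learner sees
# n sentences, all of type A, and updates a uniform prior. Under strong
# sampling the absence of B is evidence for H2; under weak sampling it is not.

def posterior_h2(n_observed_A, strong_sampling=True):
    """P(H2 | data) for the toy two-hypothesis learner."""
    prior_h1 = prior_h2 = 0.5
    if strong_sampling:
        # Each sentence is drawn uniformly from the set the grammar licenses.
        like_h1 = (1.0 / 2.0) ** n_observed_A     # H1 licenses 2 constructions
        like_h2 = 1.0 ** n_observed_A             # H2 licenses only A
    else:
        # Weak sampling: sentences arrive by some external process,
        # so any grammatical sentence has the same likelihood under both.
        like_h1 = like_h2 = 1.0
    evidence = prior_h1 * like_h1 + prior_h2 * like_h2
    return prior_h2 * like_h2 / evidence

for n in (0, 1, 5, 10):
    print(f"n = {n:2d} A-sentences: "
          f"P(H2|strong) = {posterior_h2(n, True):.3f}, "
          f"P(H2|weak) = {posterior_h2(n, False):.3f}")
```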

  13. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    Science.gov (United States)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  14. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  15. Modelo de requisitos para sistemas embebidos: Model of requirements for embedded systems

    Directory of Open Access Journals (Sweden)

    Liliana González Palacio

    2008-07-01

    Full Text Available This paper presents a requirements model to support the construction of embedded systems. Currently, the Requirements Engineering methodologies proposed for this domain do not provide continuity in the development process, since they are strongly oriented toward the design stage and place weaker emphasis on the analysis stage. Furthermore, such methodologies offer guidelines for treating requirements after they have been elicited, but they do not propose tools, such as a requirements model, for eliciting them. This work is part of a research project whose objective is to propose a Requirements Engineering (RE) methodology for the analysis of Embedded Systems (ES). The proposed requirements model and its use are illustrated through an application case consisting of eliciting the requirements for a motion-sensing system embedded in a home alarm system.

  16. High resolution weather data for urban hydrological modelling and impact assessment, ICT requirements and future challenges

    Science.gov (United States)

    ten Veldhuis, Marie-claire; van Riemsdijk, Birna

    2013-04-01

    Hydrological analysis of urban catchments requires high resolution rainfall and catchment information because of the small size of these catchments, the high spatial variability of the urban fabric, fast runoff processes and related short response times. Rainfall information available from traditional radar and rain gauge networks does not meet the relevant scales of urban hydrology. A new type of weather radar, based on X-band frequency and equipped with Doppler and dual polarimetry capabilities, promises to provide more accurate rainfall estimates at the spatial and temporal scales that are required for urban hydrological analysis. Recently, the RAINGAIN project was started to analyse the applicability of this new type of radar in the context of urban hydrological modelling. In this project, meteorologists and hydrologists work closely together in several stages of urban hydrological analysis: from the acquisition procedure of novel and high-end radar products to data acquisition and processing, rainfall data retrieval, hydrological event analysis and forecasting. The project comprises four pilot locations with various characteristics of weather radar equipment, ground stations, urban hydrological systems, modelling approaches and requirements. Access to data processing and modelling software is handled in different ways in the pilots, depending on ownership and user context. Sharing of data and software among pilots and with the outside world is an ongoing topic of discussion. The availability of high resolution weather data augments requirements with respect to the resolution of hydrological models and input data. This has led to the development of fully distributed hydrological models, the implementation of which remains limited by the unavailability of hydrological input data. On the other hand, if models are to be used in flood forecasting, hydrological models need to be computationally efficient to enable fast responses to extreme event conditions. This

  17. DEPENDABLE PRIVACY REQUIREMENTS BY AGILE MODELED LAYERED SECURITY ARCHITECTURES – WEB SERVICES CASE STUDY

    Directory of Open Access Journals (Sweden)

    M.Upendra Kumar

    2011-07-01

    Full Text Available Software Engineering covers the definition of processes, techniques and models suitable for its environment to guarantee quality of results. An important design artifact in any software development project is the Software Architecture. An important part of the Software Architecture is its set of architectural design rules, and a primary goal of the architecture is to capture the architectural design decisions. In an MDA (Model-Driven Architecture) context, the design of the system architecture is captured in the models of the system. MDA is a layered approach for modeling the architectural design rules and uses design patterns to improve the quality of the software system. To include security in the software system, security patterns are introduced that offer security at the architectural level. Moreover, agile software development methods are used to build secure systems. There are different methods defined in agile development, such as extreme programming (XP), scrum, feature-driven development (FDD), test-driven development (TDD), etc. Agile processing includes the phases of agile analysis, agile design and agile testing. These phases are defined in layers of MDA to provide security at the modeling level, which ensures that security at the system architecture stage will improve the requirements for that system. Agile modeled Layered Security Architectures increase the dependability of the architecture in terms of privacy requirements. We validate this with a case study of the dependability of privacy of Web Services Security Architectures, which helps toward a secure service-oriented security architecture. The major part of this paper is devoted to modeling architectural design rules using MDA, so that they can be enforced automatically on the detailed design and are easy for both architects and developers to understand and use. This MDA approach is implemented in use of

  18. A model predicting fluindione dose requirement in elderly inpatients including genotypes, body weight, and amiodarone.

    Science.gov (United States)

    Moreau, Caroline; Pautas, Eric; Duverlie, Charlotte; Berndt, Celia; Andro, Marion; Mahé, Isabelle; Emmerich, Joseph; Lacut, Karine; Le Gal, Grégoire; Peyron, Isabelle; Gouin-Thibault, Isabelle; Golmard, Jean-Louis; Loriot, Marie-Anne; Siguret, Virginie

    2014-04-01

    Indandione VKAs have been widely used for decades, especially in Eastern Europe and France. Contrary to coumarin VKAs, the relative contribution of individual factors to the indandione-VKA response is poorly known. In the present multicentre study, we sought to develop and validate a model including genetic and non-genetic factors to predict the daily fluindione dose requirement in elderly patients, in whom VKA dosing is challenging. We prospectively recorded clinical and therapeutic data in 230 Caucasian inpatients with a mean age of 85 ± 6 years who had reached international normalized ratio stabilisation (range 2.0-3.0) on fluindione. In the derivation cohort (n=156), we analysed 13 polymorphisms in seven genes potentially involved in the pharmacological effect or vitamin-K cycle (VKORC1, CYP4F2, EPHX1) and fluindione metabolism/transport (CYP2C9, CYP2C19, CYP3A5, ABCB1). We built a regression model incorporating non-genetic and genetic data and evaluated the model performances in a separate cohort (n=74). Body weight, amiodarone intake, and VKORC1, CYP4F2 and ABCB1 genotypes were retained in the final model, accounting for 31.5% of dose variability. No influence of CYP2C9 was observed. Our final model showed good performances: in 83.3% of the validation cohort patients, the dose was accurately predicted within 5 mg, i.e. the usual step used for adjusting fluindione dosage. In conclusion, in addition to body weight and amiodarone intake, pharmacogenetic factors (VKORC1, CYP4F2, ABCB1) related to the pharmacodynamic effect and transport of fluindione significantly influenced the dose requirement in elderly patients, while CYP2C9 did not. Studies are required to determine whether fluindione could be an alternative VKA in carriers of polymorphic CYP2C9 alleles who are hypersensitive to coumarins.
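    A minimal sketch of the kind of dose-prediction model described, assuming a linear combination of body weight, amiodarone intake and genotype counts, is shown below. All coefficients are placeholders invented for illustration; they are not the published regression estimates.

```python
# Hypothetical sketch of a dose-prediction regression of the kind described:
# daily fluindione dose as a linear function of body weight, amiodarone
# intake and genotypes. Coefficients are placeholders, not the published model.
def predict_fluindione_dose_mg(weight_kg, amiodarone, vkorc1_variant_alleles,
                               cyp4f2_variant_alleles, abcb1_variant_alleles):
    """Return a predicted daily dose in mg (illustrative coefficients only)."""
    dose = 20.0                            # hypothetical intercept
    dose += 0.10 * weight_kg               # heavier patients tend to need more
    dose -= 5.0 if amiodarone else 0.0     # interacting co-medication lowers dose
    dose -= 3.0 * vkorc1_variant_alleles   # higher VKA sensitivity
    dose += 1.5 * cyp4f2_variant_alleles
    dose += 1.0 * abcb1_variant_alleles
    return max(dose, 5.0)                  # clamp to a plausible minimum dose

print(predict_fluindione_dose_mg(60, True, 1, 0, 2))
```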

  19. The Metaphysics of D-CTCs: On the Underlying Assumptions of Deutsch's Quantum Solution to the Paradoxes of Time Travel

    CERN Document Server

    Dunlap, Lucas

    2015-01-01

    I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent ...

  20. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    Full Text Available This article is devoted to an architectural and historical analysis of the constructions forming the complex of the Assumption Cathedral of the Astrakhan Kremlin, which has not previously been considered a subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the creation history of the complete architectural complex, sustained in the single style of the Muscovite baroque and unique in its composite construction, is considered, and its interpretation in the all-Russian architectural context is offered. Typological features of the individual constructions are brought to light. The Prechistinsky bell tower has an atypical architectural solution: a hexagonal structure on octagonal and quadrangular structures. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto ("the Place of Execution") located on an axis from the west; it is connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article also considers the version that the Place of Execution emerged on the basis of an earlier construction, a tower called "the Peal", which is repeatedly mentioned in written sources in connection with S. Razin's revolt. Metropolitan Sampson, seeking to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct reference to the capital prototype, emphasizing continuity and a close connection with Moscow.

  1. AN EFFICIENT BIT COMMITMENT SCHEME BASED ON FACTORING ASSUMPTION

    Institute of Scientific and Technical Information of China (English)

    Zhong Ming; Yang Yixian

    2001-01-01

    Recently, many bit commitment schemes have been presented. This paper presents a new practical bit commitment scheme based on Schnorr's one-time knowledge proof scheme, in which the cut-and-choose method and the many random exam candidates in the protocols are replaced by a single challenge number. The proposed bit commitment scheme is therefore more efficient and practical than previous schemes. In addition, the security of the proposed scheme under the factoring assumption is proved, thus clarifying the cryptographic basis of the proposed scheme.
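    For readers unfamiliar with the commit/reveal interface that any bit commitment scheme must provide (hiding before reveal, binding afterwards), a generic hash-based sketch follows. It is deliberately not the factoring-based, Schnorr-derived scheme of the paper; it only illustrates the interface.

```python
# Generic illustration of the commit/reveal interface that any bit commitment
# scheme provides (hiding + binding). Simple hash-based sketch for exposition
# only; it is NOT the factoring-based scheme proposed in the paper.
import hashlib
import secrets

def commit(bit: int):
    """Commit to a bit; return (commitment, opening)."""
    nonce = secrets.token_bytes(32)                       # randomness hides the bit
    commitment = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return commitment, (nonce, bit)

def verify(commitment: str, opening) -> bool:
    nonce, bit = opening
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == commitment

c, opening = commit(1)
assert verify(c, opening)               # honest reveal accepted
assert not verify(c, (opening[0], 0))   # binding: cannot open as the other bit
```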

  2. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  3. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    articles across various research disciplines. We find and classify a stock of 107 relevant articles into four scientific discourses: the normative, the interpretive, the critical, and the dialogical discourses, as formulated by Deetz (1996). We find that the normative discourse dominates the IT PPM literature, and few contributions represent the three remaining discourses, which unjustifiably leaves out issues that research could and most probably should investigate. In order to highlight research potentials, limitations, and underlying assumptions of each discourse, we develop four IT PPM metaphors...

  4. Model requirements for estimating and reporting soil C stock changes in national greenhouse gas inventories

    Science.gov (United States)

    Didion, Markus; Blujdea, Viorel; Grassi, Giacomo; Hernández, Laura; Jandl, Robert; Kriiska, Kaie; Lehtonen, Aleksi; Saint-André, Laurent

    2016-04-01

    Globally, soils are the largest terrestrial store of carbon (C) and small changes may contribute significantly to the global C balance. Due to the potential implications for climate change, accurate and consistent estimates of C fluxes at the large scale are important, as recognized, for example, in international agreements such as the United Nations Framework Convention on Climate Change (UNFCCC). Under the UNFCCC, and also under the Kyoto Protocol, C balances must be reported annually. Most measurement-based soil inventories are currently not able to detect annual changes in soil C stocks consistently across space and representatively at national scales. The use of models to obtain relevant estimates is considered an appropriate alternative under the UNFCCC and the Kyoto Protocol. Several soil carbon models have been developed, but few are suitable for consistent application across larger scales. Consistency is often limited by the lack of input data for models, which can result in biased estimates; thus, the reporting criterion of accuracy (i.e., emission and removal estimates are systematically neither over nor under true emissions or removals) may not be met. Based on a qualitative assessment of the ability to meet the criteria established for GHG reporting under the UNFCCC, including accuracy, consistency, comparability, completeness, and transparency, we assessed the suitability of commonly used simulation models for estimating annual C stock changes in mineral soil in European forests. Among the six simulation models discussed we found a clear trend toward models providing quantitatively precise site-specific estimates, which may lead to biased estimates across space. To meet reporting needs for national GHG inventories, we conclude that there is a need for models producing qualitatively realistic results in a transparent and comparable manner. Based on the application of one model along a gradient from Boreal forests in Finland to Mediterranean forests

  5. The metaphysics of D-CTCs: On the underlying assumptions of Deutsch's quantum solution to the paradoxes of time travel

    Science.gov (United States)

    Dunlap, Lucas

    2016-11-01

    I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent a quantum solution to the paradoxes of time travel.

  6. Requirements on catchment modelling for an optimized reservoir operation in water deficient regions

    Science.gov (United States)

    Froebrich, J.; Kirkby, M. J.; Reder, C.

    2002-12-01

    To provide long-term water security in water-deficient regions, the interaction of erosion, pollutant emission, the impact of irrigation areas, the characteristics of ephemeral streams and the resulting water quality in reservoirs must be considered in water management plans. In many semiarid regions, reservoirs are the only source of water, the indispensable element required for human existence. By the year 2000 the world had built many small dams and more than 45,000 large ones. In these reservoirs, water quality and quantity are affected both by climate change and by catchment land use. Results of past projects indicate that the specific control of reservoirs can lead to a significant improvement of water quality, but reservoirs have already transformed the quantity and quality of surface waters in a remarkable manner. Reservoirs, with their distinct behaviour as reactors, could therefore be considered key elements in semiarid and arid catchments, linking and transforming rivers and channels. Effective practical operation schemes require a thorough knowledge of spatial and temporal variation in water quality and quantity, and simulation models can be used to support the identification of the most effective management options at catchment scale. We discuss here the particular requirements for water quality modelling at catchment scale in semiarid and arid regions. Results of reservoir water quality modelling are presented. The potential of catchment models like the PESERA model is demonstrated. Knowledge gaps, such as the consideration of ephemeral streams in catchment models, are addressed and fresh problem-solving strategies are introduced. Erosion models like PESERA can provide important information on sediment transport, describing the carrier potential for organic matter, heavy metals and pesticides from terrestrial areas into the water courses. The new EU research project tempQsim will improve understanding of how organic matter is transformed in river beds

  7. A Prognostic Model for One-year Mortality in Patients Requiring Prolonged Mechanical Ventilation

    Science.gov (United States)

    Carson, Shannon S.; Garrett, Joanne; Hanson, Laura C.; Lanier, Joyce; Govert, Joe; Brake, Mary C.; Landucci, Dante L.; Cox, Christopher E.; Carey, Timothy S.

    2009-01-01

    Objective: A measure that identifies patients who are at high risk of mortality after prolonged ventilation will help physicians communicate prognosis to patients or surrogate decision-makers. Our objective was to develop and validate a prognostic model for 1-year mortality in patients ventilated for 21 days or more. Design: Prospective cohort study. Setting: University-based tertiary care hospital. Patients: 300 consecutive medical, surgical, and trauma patients requiring mechanical ventilation for at least 21 days were prospectively enrolled. Measurements and Main Results: Predictive variables were measured on day 21 of ventilation for the first 200 patients and entered into logistic regression models with 1-year and 3-month mortality as outcomes. Final models were validated using data from 100 subsequent patients. One-year mortality was 51% in the development set and 58% in the validation set. Independent predictors of mortality included requirement for vasopressors, hemodialysis, platelet count ≤150 ×10⁹/L, and age ≥50. Areas under the ROC curve for the development model and validation model were 0.82 (se 0.03) and 0.82 (se 0.05), respectively. The model had sensitivity of 0.42 (se 0.12) and specificity of 0.99 (se 0.01) for identifying patients who had ≥90% risk of death at 1 year. Observed mortality was highly consistent with both 3- and 12-month predicted mortality. These four predictive variables can be used in a simple prognostic score that clearly identifies low-risk patients (no risk factors, 15% mortality) and high-risk patients (3 or 4 risk factors, 97% mortality). Conclusions: Simple clinical variables measured on day 21 of mechanical ventilation can identify patients at highest and lowest risk of death from prolonged ventilation. PMID:18552692
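    A minimal sketch of the four-variable prognostic score described in the abstract follows. The score itself (one point per risk factor) matches the abstract; the mortality lookup uses only the two anchor values reported (about 15% with no risk factors, about 97% with three or four), and the unreported intermediate values are left unset.

```python
# Hypothetical sketch of the simple prognostic score described: one point for
# each of the four day-21 risk factors. Intermediate mortality values were not
# reported in the abstract and are left as None rather than invented.
def prognostic_score(vasopressors, hemodialysis, platelets_le_150, age_ge_50):
    return sum(bool(x) for x in (vasopressors, hemodialysis,
                                 platelets_le_150, age_ge_50))

APPROX_ONE_YEAR_MORTALITY = {0: 0.15, 1: None, 2: None, 3: 0.97, 4: 0.97}

score = prognostic_score(vasopressors=True, hemodialysis=False,
                         platelets_le_150=True, age_ge_50=True)
print(score, APPROX_ONE_YEAR_MORTALITY[score])  # 3 -> ~0.97
```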

  8. Evaluation of olive flowering at low latitude sites in Argentina using a chilling requirement model

    Energy Technology Data Exchange (ETDEWEB)

    Aybar, V.E.; Melo-Abreu, J.P. de; Searles, P.S.; Matias, A.G.; Del Rio, C.; Caballero, C. M.; Rousseaux, M.C.

    2015-07-01

    Olive production has expanded significantly from the Mediterranean Basin into the New World over the last two decades. In some cases, cultivars of European origin have been introduced at a large commercial scale with little previous evaluation of potential productivity. The objective of this study was to evaluate whether a temperature-driven simulation model developed in the Mediterranean Basin to predict normal flowering occurrence and flowering date using cultivar-specific thermal requirements was suitable for the low latitude areas of Northwest Argentina. The model was validated at eight sites over several years and a wide elevation range (350–1200 m above mean sea level) for three cultivars (‘Arbequina’, ‘Frantoio’, ‘Leccino’) with potentially different chilling requirements. In ‘Arbequina’, normal flowering was observed at almost all sites and in all years, while normal flowering events in ‘Frantoio’ and ‘Leccino’ were uncommon. The model successfully predicted if flowering would be normal in 92% and 83% of the cases in ‘Arbequina’ and ‘Frantoio’, respectively, but was somewhat less successful in ‘Leccino’ (61%). When flowering occurred, the predicted flowering date was within ± 7 days of the observed date in 71% of the cases. Overall, the model results indicate that cultivar-specific simulation models may be used as an approximate tool to predict whether individual cultivars will be successful in new growing areas. In Northwest Argentina, the model could be used to identify cultivars to replace ‘Frantoio’ and ‘Leccino’ and to simulate global warming scenarios. (Author)
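    A toy sketch of a cultivar-specific chilling-requirement check, the kind of thermal-requirement logic such a flowering model relies on, is given below. The 7.2 °C base temperature, the chill-unit definition and the per-cultivar thresholds are illustrative assumptions, not the calibrated parameters of the evaluated model.

```python
# Hypothetical sketch of a cultivar-specific chilling-requirement check:
# accumulate chill units below a base temperature and expect normal flowering
# only if the cultivar's threshold is reached. All parameters are illustrative.
def chill_units(hourly_temps_c, base_c=7.2):
    """One chill unit per hour at or below the base temperature."""
    return sum(1 for t in hourly_temps_c if t <= base_c)

CHILL_REQUIREMENT = {"Arbequina": 300, "Frantoio": 600, "Leccino": 550}  # hypothetical

def normal_flowering_expected(cultivar, hourly_temps_c):
    return chill_units(hourly_temps_c) >= CHILL_REQUIREMENT[cultivar]

winter = [5] * 400 + [12] * 800          # toy hourly temperature record
print(normal_flowering_expected("Arbequina", winter))  # True
print(normal_flowering_expected("Frantoio", winter))   # False
```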

  9. Use of anchoring vignettes to evaluate health reporting behavior amongst adults aged 50 years and above in Africa and Asia – testing assumptions

    Directory of Open Access Journals (Sweden)

    Siddhivinayak Hirve

    2013-09-01

    Full Text Available Background: Comparing self-rating health responses across individuals and cultures is misleading due to different reporting behaviors. Anchoring vignettes is a technique that allows identifying and adjusting self-rating responses for reporting heterogeneity (RH). Objective: This article aims to test two crucial assumptions, vignette equivalence (VE) and response consistency (RC), that are required to be met before vignettes can be used to adjust self-rating responses for RH. Design: We used self-ratings, vignettes, and objective measures covering the domains of mobility and cognition from the WHO study on global AGEing and adult health, administered to older adults aged 50 years and above from eight low- and middle-income countries in Africa and Asia. For VE, we specified a hierarchical ordered probit (HOPIT) model to test for equality of perceived vignette locations. For RC, we tested for equality of the thresholds that are used to rate vignettes with thresholds derived from objective measures and used to rate their own health function. Results: There was evidence of RH in self-rating responses for difficulty in mobility and cognition. Assumptions of VE and RC between countries were violated, driven by age, sex, and education. However, within a country context, the assumption of VE was met in some countries (mainly in Africa, except Tanzania) and violated in others (mainly in Asia, except India). Conclusion: We conclude that violation of the assumptions of RC and VE precluded the use of anchoring vignettes to adjust self-rated responses for RH across countries in Asia and Africa.

  10. Application and project portfolio valuation using enterprise architecture and business requirements modelling

    Science.gov (United States)

    Quartel, Dick; Steen, Maarten W. A.; Lankhorst, Marc M.

    2012-05-01

    This article describes an architecture-based approach to IT valuation. This approach offers organisations an instrument to valuate their application and project portfolios and to make well-balanced decisions about IT investments. The value of a software application is assessed in terms of its contribution to a selection of business goals. Based on such assessments, the value of different applications can be compared, and requirements for innovation, development, maintenance and phasing out can be identified. IT projects are proposed to realise the requirements. The value of each project is assessed in terms of the value it adds to one or more applications. This value can be obtained by relating the 'as-is' application portfolio to the 'to-be' portfolio that is being proposed by the project portfolio. In this way, projects can be ranked according to their added value, given a certain selection of business goals. The approach uses ArchiMate to model the relationship between software applications, business processes, services and products. In addition, two language extensions are used to model the relationship of these elements to business goals and requirements and to projects and project portfolios. The approach is illustrated using the portfolio method of Bedell and has been implemented in BiZZdesign Architect.
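    The valuation logic can be sketched in a few lines: an application's value is its weighted contribution to selected business goals, and a project's value is the difference between the 'to-be' and 'as-is' values of the applications it touches. The goal weights and scores below are illustrative assumptions, not part of the Bedell-based method or the BiZZdesign implementation.

```python
# Hypothetical sketch of the valuation idea described: application value as a
# weighted contribution to business goals; project value as the value it adds
# to the applications it changes ("to-be" minus "as-is"). All numbers are
# illustrative assumptions.
GOAL_WEIGHTS = {"customer_satisfaction": 0.5, "cost_reduction": 0.3, "compliance": 0.2}

def application_value(contribution_scores):
    """contribution_scores: goal -> contribution in [0, 1]."""
    return sum(GOAL_WEIGHTS[g] * s for g, s in contribution_scores.items())

def project_value(as_is, to_be):
    """Added value of a project over the applications it affects."""
    return sum(application_value(to_be[a]) - application_value(as_is[a]) for a in to_be)

as_is = {"CRM": {"customer_satisfaction": 0.4, "cost_reduction": 0.2, "compliance": 0.6}}
to_be = {"CRM": {"customer_satisfaction": 0.7, "cost_reduction": 0.3, "compliance": 0.6}}
print(project_value(as_is, to_be))  # rank candidate projects by this added value
```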

  11. Modelling regional variability of irrigation requirements due to climate change in Northern Germany.

    Science.gov (United States)

    Riediger, Jan; Breckling, Broder; Svoboda, Nikolai; Schröder, Winfried

    2016-01-15

    The question of whether global climate change invalidates the efficiency of established land use practice cannot be answered without systemic considerations on a region-specific basis. In this context, plant water availability and irrigation requirements, respectively, were investigated in Northern Germany. The regions under investigation--Diepholz, Uelzen, Fläming and Oder-Spree--represent a climatic gradient with increasing continentality from west to east. Besides regional climatic variation and climate change, soil conditions and crop management differ on the regional scale. In the model regions, temporary seasonal droughts already influence crop success today, but at different levels of intensity depending mainly on climate conditions. By linking soil water holding capacities, crop management data and calculations of evapotranspiration and precipitation from the climate change scenario RCP 8.5, irrigation requirements for maintaining crop productivity were estimated for the years 1991 to 2070. The results suggest that water requirements for crop irrigation are likely to increase, with considerable regional variation. For some of the regions, irrigation requirements might increase to such an extent that the established regional agricultural practice might be hard to retain. Where water availability is limited, agricultural practice, such as management and the cultivated crop spectrum, has to be changed to deal with the new challenges.

  12. MODSARE-V: Validation of Dependability and Safety Critical Software Components with Model Based Requirements

    Science.gov (United States)

    Silveira, Daniel T. de M. M.; Schoofs, Tobias; Alana Salazar, Elena; Rodriguez Rodriguez, Ana Isabel; Devic, Marie-Odile

    2010-08-01

    The wide use of RAMS methods and techniques [1] (e.g. SFMECA, SFTA, HAZOP, HA...) in critical software development has resulted in the specification of new software requirements, design constraints and other issues such as mandatory coding rules. Given the large variety of RAMS requirements and techniques, different types of Verification and Validation (V&V) [14] are spread over the phases of the software engineering process. As a result, the V&V process becomes complex, and the cost and time required for a complete and consistent V&V process increase. By introducing the concept of a model-based approach to facilitate the RAMS requirements definition process, the V&V effort may be reduced in time and cost. MODSARE-V demonstrates the feasibility of this concept based on case studies applied to ground or on-board software space projects with critical functions/components. This paper describes the approach adopted in MODSARE-V to realize the concept in a prototype and summarizes the results and conclusions reached after applying the prototype to the case studies.

  13. Modeling Crop Water Requirement at Regional Scales in the Context of Integrated Hydrology

    Science.gov (United States)

    Dogrul, E. C.; Kadir, T.; Brush, C. F.; Chung, F. I.

    2009-12-01

    In developed watersheds, the stresses on surface and subsurface water resources are generally created by groundwater pumping and stream flow diversions to satisfy agricultural and urban water requirements. The application of pumping and diversion to meet these requirements also affects the surface and subsurface water system through recharge of the aquifer and surface runoff back into the streams. The agricultural crop water requirement is a function of climate, soil and land surface physical properties as well as land use management practices, which are spatially distributed and evolve in time. In almost all modeling studies, pumping and diversions are specified as predefined stresses and are not included in the simulation as an integral and dynamic component of the hydrologic cycle that depends on other hydrologic components. To address this issue, the California Department of Water Resources has been developing a new root zone module that can either be used as a stand-alone modeling tool or be linked to other stream and aquifer modeling tools. The tool, named the Integrated Water Flow Model Demand Calculator (IDC), computes crop water requirements under user-specified climatic, land-use and irrigation management settings at regional scales, and routes the precipitation and irrigation water through the root zone using physically based methods. In calculating the crop water requirement, IDC uses an irrigation-scheduling type approach in which irrigation is triggered when the soil moisture falls below a user-specified level. Water demands for managed wetlands, urban areas, and agricultural crops, including rice, can either be computed by IDC or specified by the user, depending on the requirements and available data for the modeling project. For areas covered with native vegetation, water demand is not computed and only precipitation is routed through the root zone. Many irrigation practices, such as irrigation for leaching, re-use of irrigation return flow, flooding and
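    A toy version of the irrigation-scheduling idea, assuming a simple daily root-zone water balance with a user-specified trigger level, is sketched below; the field capacity, trigger fraction and refill-to-capacity rule are illustrative assumptions, not IDC's internal formulation.

```python
# Hypothetical sketch of the irrigation-scheduling approach described: track
# root-zone soil moisture with a daily water balance and trigger irrigation
# when moisture falls below a user-specified level. Parameters are illustrative.
def simulate_irrigation(precip_mm, et_mm, field_capacity_mm=150.0,
                        trigger_fraction=0.5, initial_mm=150.0):
    moisture = initial_mm
    applied = []
    for p, et in zip(precip_mm, et_mm):
        moisture = min(moisture + p - et, field_capacity_mm)
        irrigation = 0.0
        if moisture < trigger_fraction * field_capacity_mm:
            irrigation = field_capacity_mm - moisture   # refill to field capacity
            moisture = field_capacity_mm
        applied.append(irrigation)
    return applied   # daily irrigation requirement (mm)

demand = simulate_irrigation(precip_mm=[0, 2, 0, 0, 5],
                             et_mm=[6, 6, 7, 7, 6], initial_mm=90.0)
print(sum(demand), "mm of irrigation over the period")
```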

  14. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented and the important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to introduce a known profile of residual stresses.

  15. On the role of assumptions in cladistic biogeographical analyses

    Directory of Open Access Journals (Sweden)

    Charles Morphy Dias dos Santos

    2011-01-01

    Full Text Available The biogeographical Assumptions 0, 1, and 2 (respectively A0, A1 and A2) are theoretical terms used to interpret and resolve incongruence in order to find general areagrams. The aim of this paper is to suggest the use of A2 instead of A0 and A1 in solving uncertainties during cladistic biogeographical analyses. In a theoretical example, using Component Analysis and Primary Brooks Parsimony Analysis (primary BPA), A2 allows for the reconstruction of the true sequence of disjunction events within a hypothetical scenario, while A0 adds spurious area relationships. A0, A1 and A2 are interpretations of the relationships between areas, not between taxa. Since area relationships are not equivalent to cladistic relationships, it is inappropriate to use the distributional information of taxa to resolve ambiguous patterns in areagrams, as A0 does. Although ambiguity in areagrams is virtually impossible to explain, A2 is better and more neutral than any other biogeographical assumption.

  16. The extended evolutionary synthesis: its structure, assumptions and predictions

    Science.gov (United States)

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the 'extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  17. Time derivatives of the spectrum: Relaxing the stationarity assumption

    Science.gov (United States)

    Prieto, G. A.; Thomson, D. J.; Vernon, F. L.

    2005-12-01

    Spectrum analysis of seismic waveforms has played a significant role in the understanding of multiple aspects of Earth structure and earthquake source physics. In recent years the multitaper spectrum estimation approach (Thomson, 1982) has been applied to geophysical problems, providing not only reliable estimates of the spectrum but also estimates of spectral uncertainties (Thomson and Chave, 1991). However, these improved spectral estimates were developed under the assumption of local stationarity and provide an incomplete description of the observed process. It is obvious that, due to the intrinsic attenuation of the Earth, the amplitudes, and thus the frequency content, change with time as waves pass through a seismic station. There have been remarkable improvements in techniques to analyze non-stationary signals, including wavelet decomposition, the Wigner-Ville spectrum and the dual-frequency spectrum. We apply one of the recently developed techniques, Quadratic Inverse Theory (Thomson, 1990, 1994), combined with the multitaper technique, to look at the time derivatives of the spectrum. If the spectrum is reasonably white in a certain bandwidth, using QI theory we can estimate the derivatives of the spectrum at each frequency. We test synthetic signals to corroborate the approach and apply it to records of small earthquakes at local distances. This is a first approach to combining classical spectrum analysis with a relaxation of the stationarity assumption that is generally made.
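    For reference, a minimal multitaper spectrum estimate in the spirit of Thomson (1982) can be written as follows; this generic sketch averages DPSS eigenspectra and does not implement the Quadratic Inverse estimates of spectral time derivatives described here. The time-bandwidth product and taper count are arbitrary choices.

```python
# Minimal multitaper spectrum estimate using DPSS (Slepian) tapers. Generic
# sketch only; not the Quadratic Inverse implementation of the paper.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(x, fs, nw=4.0, k=7):
    """Average the eigenspectra from k Slepian tapers."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)                       # shape (k, n)
    eigenspecs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    spectrum = eigenspecs.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectrum

fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
freqs, spec = multitaper_spectrum(x, fs)
print(freqs[np.argmax(spec)])   # peak near 5 Hz
```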

  18. Technical support document for proposed revision of the model energy code thermal envelope requirements

    Energy Technology Data Exchange (ETDEWEB)

    Conner, C.C.; Lucas, R.G.

    1993-02-01

    This report documents the development of the proposed revision of the Council of American Building Officials' (CABO) 1993 supplement to the 1992 Model Energy Code (MEC) (referred to as the 1993 MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. The goal of this analysis was to develop revised guidelines based on an objective methodology that determined the most cost-effective (least total life-cycle cost [LCC]) combination of energy conservation measures (ECMs) for residences in different locations. The ECMs with the lowest LCC were used as a basis for proposing revised MEC maximum U_o-value (thermal transmittance) curves in the MEC format. The changes proposed here affect the requirements for "group R" residences. The group R residences are detached one- and two-family dwellings (referred to as single-family) and all other residential buildings three stories or less (referred to as multifamily).

  19. Technical support document for proposed revision of the model energy code thermal envelope requirements

    Energy Technology Data Exchange (ETDEWEB)

    Conner, C.C.; Lucas, R.G.

    1993-02-01

    This report documents the development of the proposed revision of the Council of American Building Officials' (CABO) 1993 supplement to the 1992 Model Energy Code (MEC) (referred to as the 1993 MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. The goal of this analysis was to develop revised guidelines based on an objective methodology that determined the most cost-effective (least total life-cycle cost [LCC]) combination of energy conservation measures (ECMs) for residences in different locations. The ECMs with the lowest LCC were used as a basis for proposing revised MEC maximum U_o-value (thermal transmittance) curves in the MEC format. The changes proposed here affect the requirements for "group R" residences. The group R residences are detached one- and two-family dwellings (referred to as single-family) and all other residential buildings three stories or less (referred to as multifamily).

  20. Model of an aquaponic system for minimised water, energy and nitrogen requirements.

    Science.gov (United States)

    Reyes Lastiri, D; Slinkert, T; Cappon, H J; Baganz, D; Staaks, G; Keesman, K J

    2016-01-01

    Water and nutrient savings can be established by coupling water streams between interacting processes. Wastewater from production processes contains nutrients like nitrogen (N), which can and should be recycled in order to meet future regulatory discharge demands. Optimisation of interacting water systems is a complex task. An effective way of understanding, analysing and optimising such systems is by applying mathematical models. The present modelling work aims at supporting the design of a nearly emission-free aquaculture and hydroponic system (aquaponics), thus contributing to sustainable production and to food security for the 21st century. Based on the model, a system that couples 40 m³ fish tanks and a hydroponic system of 1,000 m² can produce 5 tons of tilapia and 75 tons of tomato yearly. The system requires energy to condense and recover evaporated water, for lighting and heating, adding up to 1.3 GJ/m² every year. In the suggested configuration, the fish can provide about 26% of the N required in a plant cycle. A coupling strategy that sends water from the fish to the plants in amounts proportional to the fish feed input reduces the standard deviation of the NO₃⁻ level in the fish cycle by 35%.

  1. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    Science.gov (United States)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.
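    A generic Lagrangian particle-tracking step on a toy pore network is sketched below to make requirements (b) and (c) concrete. The network, the plug-flow travel-time rule and the naive flux-weighted transfer rule are illustrative assumptions; the study's point is precisely that this transfer rule needs to be more accurate than simple flux weighting.

```python
# Generic sketch of a Lagrangian (particle-tracking) transport step on a toy
# pore network. The flux-weighted transfer rule below is the naive baseline
# that the study identifies as a source of first-order error; treat
# choose_next_throat as the component to refine.
import random

# throat id -> (downstream pore, volumetric flow rate q, volume v)
NETWORK = {
    "t1": {"to_pore": "p2", "q": 2.0, "v": 1.0},
    "t2": {"to_pore": "p2", "q": 1.0, "v": 0.5},
    "t3": {"to_pore": "out", "q": 3.0, "v": 2.0},
}
OUTGOING = {"p1": ["t1", "t2"], "p2": ["t3"]}   # pore id -> outgoing throats

def choose_next_throat(pore):
    """Naive flux-weighted transfer (baseline rule, known to introduce error)."""
    throats = OUTGOING[pore]
    weights = [NETWORK[t]["q"] for t in throats]
    return random.choices(throats, weights=weights, k=1)[0]

def track_particle(start_pore="p1"):
    pore, time = start_pore, 0.0
    while pore in OUTGOING:
        t = choose_next_throat(pore)
        time += NETWORK[t]["v"] / NETWORK[t]["q"]   # plug-flow travel time
        pore = NETWORK[t]["to_pore"]
    return time

print(sum(track_particle() for _ in range(1000)) / 1000)  # mean breakthrough time
```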

  2. Simulation Modeling Requirements for Loss-of-Control Accident Prevention of Turboprop Transport Aircraft

    Science.gov (United States)

    Crider, Dennis; Foster, John V.

    2012-01-01

    This paper addresses simulation modeling requirements that are unique to turboprop transport aircraft and highlights the growing need for aerodynamic models suitable for stall training for these configurations. A review of prominent accidents that involved aerodynamic stall is used to illustrate various modeling features unique to turboprop configurations and the impact of stall behavior on susceptibility to loss of control that has led to new training requirements. This is followed by an overview of stability and control behavior of straight-wing turboprops, the related aerodynamic characteristics, and a summary of recent experimental studies on icing effects. In addition, differences in flight dynamics behavior between swept-wing jets and straight-wing turboprop configurations are discussed to compare and contrast modeling requirements. Specific recommendations for aerodynamic models along with further research needs and data measurements are also provided.

  3. Vitamin D Signaling in the Bovine Immune System: A Model for Understanding Human Vitamin D Requirements

    Directory of Open Access Journals (Sweden)

    Corwin D. Nelson

    2012-03-01

    Full Text Available The endocrine physiology of vitamin D in cattle has been rigorously investigated and has yielded information on vitamin D requirements, endocrine function in health and disease, general metabolism, and maintenance of calcium homeostasis in cattle. These results are relevant to human vitamin D endocrinology. The current debate regarding vitamin D requirements is centered on the requirements for proper intracrine and paracrine vitamin D signaling. Studies in adult and young cattle can provide valuable insight for understanding vitamin D requirements as they relate to innate and adaptive immune responses during infectious disease. In cattle, toll-like receptor recognition activates an intracrine and paracrine vitamin D signaling mechanism in the immune system that regulates innate and adaptive immune responses in the presence of adequate 25-hydroxyvitamin D. Furthermore, experiments with mastitis in dairy cattle have provided in vivo evidence for the intracrine vitamin D signaling mechanism in macrophages as well as vitamin D-mediated suppression of infection. Epidemiological evidence indicates that circulating concentrations above 32 ng/mL of 25-hydroxyvitamin D are necessary for optimal vitamin D signaling in the immune system, but experimental evidence is lacking for that value. Experiments in cattle can provide that evidence as circulating 25-hydroxyvitamin D concentrations can be experimentally manipulated within ranges that are normal for humans and cattle. Additionally, young and adult cattle can be experimentally infected with bacteria and viruses associated with significant diseases in both cattle and humans. Utilizing the bovine model to further delineate the immunomodulatory role of vitamin D will provide potentially valuable insights into the vitamin D requirements of both humans and cattle, especially as they relate to immune response capacity and infectious disease resistance.

  4. Vitamin D signaling in the bovine immune system: a model for understanding human vitamin D requirements.

    Science.gov (United States)

    Nelson, Corwin D; Reinhardt, Timothy A; Lippolis, John D; Sacco, Randy E; Nonnecke, Brian J

    2012-03-01

    The endocrine physiology of vitamin D in cattle has been rigorously investigated and has yielded information on vitamin D requirements, endocrine function in health and disease, general metabolism, and maintenance of calcium homeostasis in cattle. These results are relevant to human vitamin D endocrinology. The current debate regarding vitamin D requirements is centered on the requirements for proper intracrine and paracrine vitamin D signaling. Studies in adult and young cattle can provide valuable insight for understanding vitamin D requirements as they relate to innate and adaptive immune responses during infectious disease. In cattle, toll-like receptor recognition activates an intracrine and paracrine vitamin D signaling mechanism in the immune system that regulates innate and adaptive immune responses in the presence of adequate 25-hydroxyvitamin D. Furthermore, experiments with mastitis in dairy cattle have provided in vivo evidence for the intracrine vitamin D signaling mechanism in macrophages as well as vitamin D-mediated suppression of infection. Epidemiological evidence indicates that circulating concentrations above 32 ng/mL of 25-hydroxyvitamin D are necessary for optimal vitamin D signaling in the immune system, but experimental evidence is lacking for that value. Experiments in cattle can provide that evidence as circulating 25-hydroxyvitamin D concentrations can be experimentally manipulated within ranges that are normal for humans and cattle. Additionally, young and adult cattle can be experimentally infected with bacteria and viruses associated with significant diseases in both cattle and humans. Utilizing the bovine model to further delineate the immunomodulatory role of vitamin D will provide potentially valuable insights into the vitamin D requirements of both humans and cattle, especially as they relate to immune response capacity and infectious disease resistance.

  5. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between... and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random...
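    Taking the abstract's definition literally, D2 can be computed as the relative reduction in the variance of the pooled estimate when moving from a random-effects to a fixed-effect model. The inverse-variance weighting and the toy data below are illustrative; consult the paper for the exact derivation and the adjustment of the required information size.

```python
# Sketch of the diversity measure D2 as defined in the abstract: the relative
# reduction in the variance of the pooled estimate when moving from a
# random-effects to a fixed-effect model. Toy data; illustration only.
def pooled_variance(variances, tau2=0.0):
    weights = [1.0 / (v + tau2) for v in variances]
    return 1.0 / sum(weights)

def diversity_d2(trial_variances, tau2):
    v_fixed = pooled_variance(trial_variances, tau2=0.0)
    v_random = pooled_variance(trial_variances, tau2=tau2)
    return (v_random - v_fixed) / v_random    # 0 when tau2 = 0

print(diversity_d2([0.04, 0.09, 0.02, 0.06], tau2=0.03))  # ~0.48 for this toy data
```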

  6. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    Science.gov (United States)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  7. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  8. Assumption- versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2016-08-04

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  9. Emission inventories and modeling requirements for the development of air quality plans. Application to Madrid (Spain).

    Science.gov (United States)

    Borge, Rafael; Lumbreras, Julio; Pérez, Javier; de la Paz, David; Vedrenne, Michel; de Andrés, Juan Manuel; Rodríguez, Ma Encarnación

    2014-01-01

    Modeling is an essential tool for the development of atmospheric emission abatement measures and air quality plans. Most often these plans are related to urban environments with high emission density and population exposure. However, air quality modeling in urban areas is a rather challenging task. As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large urban areas across Europe, particularly for NO₂. This also implies that emission inventories must satisfy a number of conditions such as consistency across the spatial scales involved in the analysis, consistency with the emission inventories used for regulatory purposes and versatility to match the requirements of different air quality and emission projection models. This study reports the modeling activities carried out in Madrid (Spain) highlighting the atmospheric emission inventory development and preparation as an illustrative example of the combination of models and data needed to develop a consistent air quality plan at urban level. These included a series of source apportionment studies to define contributions from the international, national, regional and local sources in order to understand to what extent local authorities can enforce meaningful abatement measures. Moreover, source apportionment studies were conducted in order to define contributions from different sectors and to understand the maximum feasible air quality improvement that can be achieved by reducing emissions from those sectors, thus targeting emission reduction policies to the most relevant activities. Finally, an emission scenario reflecting the effect of such policies was developed and the associated air quality was modeled. © 2013.

  10. Modeling regulatory policies associated with offshore structure removal requirements in the Gulf of Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J. [Center for Energy Studies, Louisiana State University, Energy Coast and Environment Building, Baton Rouge, LA (United States)

    2008-07-15

    Federal regulations require that a lease in the Outer Continental Shelf of the Gulf of Mexico be cleared of all structures within one year after production on the lease ceases, but in recent years, the Minerals Management Service has begun to encourage operators to remove idle (non-producing) structures on producing leases that are no longer "economically viable". At the end of 2003, there were 2175 producing structures, 898 idle (non-producing) structures, and 440 auxiliary (never-producing) structures on 1356 active leases; and 329 idle structures and 65 auxiliary structures on 273 inactive leases. The purpose of this paper is to model the impact of alternative regulatory policies on the removal trends of structures and the inventory of idle iron, and to provide first-order estimates of the cost of each regulatory option. A description of the modeling framework and implementation results is presented. (author)
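
    A minimal sketch of the kind of first-order inventory projection such a policy analysis might use is shown below. Only the 898-structure starting inventory comes from the abstract; the annual idling inflow and the policy-dependent removal rates are invented for illustration and are not the paper's figures.

```python
# Hedged sketch: project the idle-structure inventory under alternative
# removal policies. Inflow and removal rates are illustrative assumptions.

def project_idle_inventory(idle_start, years, idled_per_year, removal_rate):
    """Project the idle-structure stock given an annual idling inflow and a
    policy-dependent fraction of the idle stock removed each year."""
    idle = idle_start
    history = []
    for _ in range(years):
        idle = idle + idled_per_year - removal_rate * idle
        history.append(round(idle))
    return history

# 898 idle structures on producing leases at the end of 2003 (from the abstract).
status_quo = project_idle_inventory(898, years=10, idled_per_year=60, removal_rate=0.05)
accelerated = project_idle_inventory(898, years=10, idled_per_year=60, removal_rate=0.15)

print("Status quo policy:        ", status_quo)
print("Accelerated-removal policy:", accelerated)
```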

  11. Efficient Accountable Authority Identity-Based Encryption under Static Complexity Assumptions

    CERN Document Server

    Libert, Benoît

    2008-01-01

    At Crypto'07, Goyal introduced the concept of Accountable Authority Identity-Based Encryption (A-IBE) as a convenient means to reduce the amount of trust in authorities in Identity-Based Encryption (IBE). In this model, if the Private Key Generator (PKG) maliciously re-distributes users' decryption keys, it runs the risk of being caught and prosecuted. Goyal proposed two constructions: the first, based on Gentry's IBE, relies on strong assumptions (such as q-Bilinear Diffie-Hellman Inversion); the second rests on the more classical Decision Bilinear Diffie-Hellman (DBDH) assumption but is too inefficient for practical use. In this work, we propose a new construction that is secure assuming the hardness of the DBDH problem. The efficiency of our scheme is comparable with that of Goyal's main proposal, with the advantage of relying on static assumptions (i.e., assumptions whose strength does not depend on the number of queries allowed to the adversary). By limiting the number of adversarial rewinds i...
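
    For readers unfamiliar with the assumption the scheme rests on, the standard statement of the DBDH problem is reproduced below; this is the textbook definition, not a formulation specific to the paper.

```latex
% Decision Bilinear Diffie-Hellman (DBDH): standard statement.
Let $\mathbb{G}$ and $\mathbb{G}_T$ be groups of prime order $p$ with an
efficiently computable bilinear map
$e : \mathbb{G} \times \mathbb{G} \rightarrow \mathbb{G}_T$, and let $g$
generate $\mathbb{G}$. Given $(g,\, g^a,\, g^b,\, g^c,\, T)$ for random
$a, b, c \in \mathbb{Z}_p$, the DBDH problem is to decide whether
$T = e(g,g)^{abc}$ or $T$ is a uniformly random element of $\mathbb{G}_T$.
The assumption is \emph{static} because its statement does not grow with the
number of adversarial queries, unlike $q$-type assumptions such as
$q$-Bilinear Diffie-Hellman Inversion.
```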

  12. What lies beneath: underlying assumptions in bioimage analysis.

    Science.gov (United States)

    Pridmore, Tony P; French, Andrew P; Pound, Michael P

    2012-12-01

    The need for plant image analysis tools is established and has led to a steadily expanding literature and set of software tools. This is encouraging, but raises a question: how does a plant scientist with no detailed knowledge or experience of image analysis methods choose the right tool(s) for the task at hand, or satisfy themselves that a suggested approach is appropriate? We believe that too great an emphasis is currently being placed on low-level mechanisms and software environments. In this opinion article we propose that a renewed focus on the core theories and algorithms used, and in particular the assumptions upon which they rely, will better equip plant scientists to evaluate the available resources. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Uncovering Metaethical Assumptions in Bioethical Discourse across Cultures.

    Science.gov (United States)

    Sullivan, Laura Specker

    2016-03-01

    Much of bioethical discourse now takes place across cultures. This does not mean that cross-cultural understanding has increased. Many cross-cultural bioethical discussions are marked by entrenched disagreement about whether and why local practices are justified. In this paper, I argue that a major reason for these entrenched disagreements is that problematic metaethical commitments are hidden in these cross-cultural discourses. Using the issue of informed consent in East Asia as an example of one such discourse, I analyze two representative positions in the discussion and identify their metaethical commitments. I suggest that the metaethical assumptions of these positions result from their shared method of ethical justification: moral principlism. I then show why moral principlism is problematic in cross-cultural analyses and propose a more useful method for pursuing ethical justification across cultures.

  14. Posttraumatic Growth and Shattered World Assumptions Among Ex-POWs

    DEFF Research Database (Denmark)

    Lahav, Y.; Bellin, Elisheva S.; Solomon, Z.

    2016-01-01

    The literature suggests that posttraumatic growth (PTG) entails shattered world assumptions (WAs) and that the co-occurrence of high PTG and negative WAs among trauma survivors reflects reconstruction of an integrative belief system. The present study aimed to test these claims by investigating, for the first time, the mediating role of dissociation in the relation between PTG and WAs. Method: Former prisoners of war (ex-POWs; n = 158) and comparable controls (n = 106) were assessed 38 years after the Yom Kippur War. Results: Ex-POWs endorsed more negative WAs and higher PTG and dissociation compared to controls. Ex-POWs with posttraumatic stress disorder (PTSD) endorsed negative WAs and a higher magnitude of PTG and dissociation compared to both ex-POWs without PTSD and controls. WAs were negatively correlated with dissociation and positively correlated with PTG. PTG was positively correlated with dissociation. Moreover, dissociation fully mediated the relation between PTG and WAs.
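
    The kind of mediation test described (does dissociation mediate the PTG-WAs association?) can be sketched as a regression-based indirect effect with a percentile bootstrap, as below. The simulated data, effect sizes, and variable names are assumptions for illustration; this is not the study's dataset or its exact statistical procedure.

```python
# Hedged sketch of a regression-based mediation analysis with a bootstrapped
# indirect effect: X = PTG, M = dissociation, Y = world assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 264                                   # 158 ex-POWs + 106 controls
ptg = rng.normal(size=n)                                      # X
dissociation = 0.6 * ptg + rng.normal(size=n)                 # M
world_assumptions = -0.5 * dissociation + 0.05 * ptg + rng.normal(size=n)  # Y

def ols_slope(x, y):
    """Slope of y on x from a least-squares fit with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    """a*b indirect effect: X -> M path times M -> Y path (controlling for X)."""
    a = ols_slope(x, m)
    Xc = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xc, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(ptg[idx], dissociation[idx], world_assumptions[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Indirect effect: {indirect_effect(ptg, dissociation, world_assumptions):.3f} "
      f"(95% bootstrap CI: {lo:.3f}, {hi:.3f})")
```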

  15. New media in strategy – mapping assumptions in the field

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Plesner, Ursula; Raviola, Elena

    2017-01-01

    There is plenty of empirical evidence for claiming that new media make a difference for how strategy is conceived and executed. Furthermore, there is a rapidly growing body of literature that engages with this theme and offers recommendations regarding appropriate strategic actions in relation to new media. By contrast, there is relatively little attention to the assumptions behind strategic thinking in relation to new media. This article reviews the most influential strategy journals, asking how new media are conceptualized. It is shown that strategy scholars have a tendency to place new media in relation to the outside or the inside of the organization. After discussing the literature according to these dimensions (deterministic/voluntaristic and internal/external), the article argues for a sociomaterial approach to strategy and strategy making and for using the concept of affordances.

  16. Infrastructure requirement of knowledge management system model of statistical learning tool (SLT) for education community

    Science.gov (United States)

    Abdullah, Rusli; Samah, Bahaman Abu; Bolong, Jusang; D'Silva, Jeffrey Lawrence; Shaffril, Hayrol Azril Mohamed

    2014-09-01

    Today, teaching and learning (T&L) using technology as a tool is becoming more important, especially in the field of statistics as a subject in the higher education environment. Even though there are many types of statistical learning tool (SLT) technology that can be used to support and enhance the T&L environment, there is a lack of a common, standard knowledge-management portal to provide guidance, especially regarding the infrastructure requirements of SLT, in serving the community of users (CoU) such as educators, students and other parties interested in using this technology as a tool for their T&L. Therefore, there is a need for a common, standard infrastructure requirement for a knowledge portal to help the CoU manage statistical knowledge: acquiring, storing, disseminating and applying statistical knowledge for their specific purposes. Furthermore, having this infrastructure-requirement model of a knowledge portal for SLT as guidance in promoting knowledge of best practice among the CoU can also enhance the quality and productivity of their work towards excellence in the application of statistical knowledge in the education system environment.

  17. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Therefore, heterogeneous systems have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
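
    The balancing idea can be illustrated numerically as a sweep over a single design parameter, scoring each setting by a privacy impact and an operational impact and picking the best combined score. The parameter (metering granularity), the impact functions, and the weights below are assumptions for illustration only, not the model used in the paper.

```python
# Illustrative numeric approximation of balancing privacy against operational
# capability: sweep candidate metering intervals and minimize a weighted sum
# of an assumed privacy impact and an assumed operational impact.
import numpy as np

granularity_min = np.linspace(1, 60, 60)       # candidate reporting intervals (minutes)

privacy_impact = 1.0 / granularity_min         # finer data -> higher privacy impact
operational_impact = granularity_min / 60.0    # coarser data -> worse for demand response

w_privacy, w_operational = 0.5, 0.5            # hypothetical weights
combined = w_privacy * privacy_impact + w_operational * operational_impact

best = int(np.argmin(combined))
print(f"Chosen granularity: {granularity_min[best]:.0f} min "
      f"(privacy impact {privacy_impact[best]:.2f}, "
      f"operational impact {operational_impact[best]:.2f})")
```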

  18. [In-depth interviews and the Kano model to determine user requirements in a burns unit].

    Science.gov (United States)

    González-Revaldería, J; Holguín-Holgado, P; Lumbreras-Marín, E; Núñez-López, G

    To determine the healthcare requirements of patients in a Burns Unit using qualitative techniques, such as in-depth personal interviews and Kano's methodology. Qualitative methodology using in-depth personal interviews (12 patients), Kano's conceptual model, and the SERVQHOS questionnaire (24 patients). All patients had been hospitalised in the Burns Unit in the last 12 months. Using Kano's methodology, service attributes were grouped by affinity diagrams and classified as follows: must-be, attractive (unexpected, causing great satisfaction), and one-dimensional (linked to the degree of functionality of the service). The outcomes were compared with those obtained with the SERVQHOS questionnaire. From the analysis of the in-depth interviews, 11 requirements were obtained, referring to hotel aspects, information, the need for a closer staff relationship, and organisational aspects. The attributes classified as must-be were free television and automatic TV disconnection at midnight. Those classified as attractive were: an individual room for more privacy, information about dressing-change times in order to avoid anxiety, and additional staff for in-patients. The results were complementary to those obtained with the SERVQHOS questionnaire. In-depth personal interviews provide extra knowledge about patient requirements, complementing the information obtained with questionnaires. With this methodology, more active patient participation is achieved and the companion's opinion is also taken into account. Copyright © 2016 SECA. Published by Elsevier España, S.L.U. All rights reserved.
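
    For readers unfamiliar with Kano's methodology, the sketch below shows the standard Kano evaluation table that maps paired answers to a "functional" and a "dysfunctional" question onto attribute categories. The example attribute and answers are hypothetical, not data from the Burns Unit study.

```python
# Standard Kano evaluation table: classify a service attribute from the pair
# (answer if the feature is present, answer if it is absent).

SCALE = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: answer to the functional question; columns: answer to the dysfunctional one.
KANO_TABLE = {
    "like":      ["Q", "A", "A", "A", "O"],
    "must-be":   ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live-with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}
# A = attractive, O = one-dimensional, M = must-be,
# I = indifferent, R = reverse, Q = questionable

def classify(functional_answer: str, dysfunctional_answer: str) -> str:
    """Return the Kano category for one respondent's answer pair."""
    return KANO_TABLE[functional_answer][SCALE.index(dysfunctional_answer)]

# Hypothetical respondent: likes having an individual room, could live without it.
print(classify("like", "live-with"))   # -> "A" (attractive attribute)
```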

  19. Business Process Modelling is an Essential Part of a Requirements Analysis. Contribution of EFMI Primary Care Working Group.

    Science.gov (United States)

    de Lusignan, S; Krause, P; Michalakidis, G; Vicente, M Tristan; Thompson, S; McGilchrist, M; Sullivan, F; van Royen, P; Agreus, L; Desombre, T; Taweel, A; Delaney, B

    2012-01-01

    To perform a requirements analysis of the barriers to conducting research linking primary care, genetic and cancer data. We extended our initial data-centric approach to include socio-cultural and business requirements. We created reference models of core data requirements common to most studies using unified modelling language (UML), dataflow diagrams (DFD) and business process modelling notation (BPMN). We conducted a stakeholder analysis and constructed DFD and UML diagrams for use cases based on simulated research studies. We used research output as a sensitivity analysis. Differences between the reference model and use cases identified study-specific data requirements. The stakeholder analysis identified tensions, changes in specification, some indifference from data providers, and enthusiastic informaticians urging inclusion of socio-cultural context. We identified requirements to collect information at three levels: micro (data items, which need to be semantically interoperable), meso (the medical record and data extraction), and macro (the health system and socio-cultural issues). BPMN clarified complex business requirements among data providers and vendors, as well as additional geographical requirements for patients to be represented in both linked datasets. High-quality research output was the norm for most repositories. Reference models provide high-level schemata of the core data requirements. However, business requirements modelling identifies stakeholder issues and what needs to be addressed to enable participation.

  20. Integrating behavioral-motive and experiential-requirement perspectives on psychological needs: a two process model.

    Science.gov (United States)

    Sheldon, Kennon M

    2011-10-01

    Psychological need theories offer much explanatory potential for behavioral scientists, but there is considerable disagreement and confusion about what needs are and how they work. A 2-process model of psychological needs is outlined, viewing needs as evolved functional systems that provide both (a) innate psychosocial motives that tend to impel adaptive behavior and (b) innate experiential requirements that when met reinforce adaptive behavior and promote mental health. The literature is reviewed to find support for 8 hypotheses derived from this model: that certain basic psychosocial motives are present at birth; that successful enactment of these motives supports the functioning and wellness of all humans; that individual differences in these motives develop in childhood; that these strong motive dispositions tend to produce the satisfying experiences they seek; that motive dispositions do not moderate the effect of motive-corresponding need satisfaction on well-being but do moderate the effect of assigned goal-type on rated self-concordance for those goals; that need dissatisfaction and need satisfaction correspond to the separable behavioral-motive and experiential-reward aspects of needs; and that motives and needs can become decoupled when chronic dissatisfaction of particular requirements warps or depresses the corresponding motives, such that the adaptive process fails in its function. Implications for self-determination theory and motive disposition theory are considered.