WorldWideScience

Sample records for pre-inputs assumptions inputs

  1. Three Methods for Extracting Pre-Input Data to DBGrid

    Institute of Scientific and Technical Information of China (English)

    陈朝荣

    2001-01-01

    This paper introduces three ways to extract pre-input data into the Delphi DBGrid component, and points out the characteristics of each.

  2. Sensitivity of Forward Radiative Transfer Model on Spectroscopic Assumptions and Input Geophysical Parameters at 23.8 GHz and 183 GHz Channels and its Impact on Inter-calibration of Microwave Radiometers

    Science.gov (United States)

    Datta, S.; Jones, W. L.; Ebrahimi, H.; Chen, R.; Payne, V.; Kroodsma, R.

    2014-12-01

    The first step in radiometric inter-calibration is to ascertain the self-consistency and reasonableness of the observed brightness temperature (Tb) for each individual sensor involved. One of the widely used approaches is to compare the observed Tb with a simulated Tb using a forward radiative transfer model (RTM) and input geophysical parameters at the geographic location and time of the observation. In this study we intend to test the sensitivity of the RTM to uncertainties in the input geophysical parameters as well as to the underlying physical assumptions of gaseous absorption and surface emission in the RTM. SAPHIR, a cross-track scanner onboard the Indo-French Megha-Tropiques satellite, gives us a unique opportunity to study six dual-band 183 GHz channels in an inclined orbit over the Tropics for the first time. We will also perform the same sensitivity analysis using the Advanced Technology Microwave Sounder (ATMS) 23 GHz and five 183 GHz channels. Preliminary analysis comparing GDAS and an independently retrieved profile shows some sensitivity of the RTM to the input data. An extended analysis of this work using different input geophysical parameters will be presented. Two different absorption models, the Rosenkranz and the MonoRTM, will be tested to analyze the sensitivity of the RTM to the spectroscopic assumptions in each model. Also, for the 23.8 GHz channel, the sensitivity of the RTM to the surface emissivity model will be checked. Finally, the impact of these sensitivities on radiometric inter-calibration of radiometers at sounding frequencies will be assessed.

  3. Environment Assumptions for Synthesis

    CERN Document Server

    Chatterjee, Krishnendu; Jobstmann, Barbara

    2008-01-01

    The synthesis problem asks to construct a reactive finite-state system from an ω-regular specification. Initial specifications are often unrealizable, which means that there is no system that implements the specification. A common reason for unrealizability is that assumptions on the environment of the system are incomplete. We study the problem of correcting an unrealizable specification φ by computing an environment assumption ψ such that the new specification ψ → φ is realizable. Our aim is to construct an assumption ψ that constrains only the environment and is as weak as possible. We present a two-step algorithm for computing assumptions. The algorithm operates on the game graph that is used to answer the realizability question. First, we compute a safety assumption that removes a minimal set of environment edges from the graph. Second, we compute a liveness assumption that puts fairness conditions on some of the remaining environment edges. We show that the problem of findi...

  4. Examining Computational Assumptions For Godiva IV

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Alexander Matthew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jaegers, Peter James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-11

    Over the course of summer 2016, the effects of several computational modeling assumptions with respect to the Godiva IV reactor were examined. The majority of these assumptions pertained to modeling errors existing in the control rods and burst rod. The Monte Carlo neutron transport code MCNP was used to investigate these modeling changes, primarily by comparing them to the original input deck specifications.

  5. Interface Input/Output Automata: Splitting Assumptions from Guarantees

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Nyman, Ulrik; Wasowski, Andrzej

    2006-01-01

    We propose a new look at one of the most fundamental types of behavioral interfaces: discrete time specifications of communication---directly related to the work of de Alfaro and Henzinger [3]. Our framework is concerned with distributed non-blocking asynchronous systems in the style of Lynch's ...

  6. Linking assumptions in amblyopia

    Science.gov (United States)

    LEVI, DENNIS M.

    2017-01-01

    Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956

  7. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times: Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. [NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better), we can provide constraints on these

  8. Different Random Distributions Research on Logistic-Based Sample Assumption

    Directory of Open Access Journals (Sweden)

    Jing Pan

    2014-01-01

    Full Text Available Logistic-based sample assumption is proposed in this paper, with a research on different random distributions through this system. It provides an assumption system of logistic-based sample, including its sample space structure. Moreover, the influence of different random distributions for inputs has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal distribution, uniform distribution, and beta distribution) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under different random distributions. Thereafter, numerical analysis infers that the distribution of outputs depends on that of inputs to some extent, and that this assumption system is not an independent increment process, but is quasistationary.

  9. Disastrous assumptions about community disasters

    Energy Technology Data Exchange (ETDEWEB)

    Dynes, R.R. [Univ. of Delaware, Newark, DE (United States). Disaster Research Center

    1995-12-31

    Planning for local community disasters is compounded with erroneous assumptions. Six problematic models are identified: agent facts, big accident, end of the world, media, command and control, administrative. Problematic assumptions in each of them are identified. A more adequate model centered on problem solving is identified. That there is a discrepancy between disaster planning efforts and the actual response experience seems rather universal. That discrepancy is symbolized by the graffiti which predictably surfaces on many walls in post disaster locations -- "First the earthquake, then the disaster." That contradiction is seldom reduced as a result of post disaster critiques, since the most usual conclusion is that the plan was adequate but the "people" did not follow it. Another explanation will be provided here. A more plausible explanation for failure is that most planning efforts adopt a number of erroneous assumptions which affect the outcome. Those assumptions are infrequently changed or modified by experience.

  10. Test of Poisson Failure Assumption.

    Science.gov (United States)

    1982-09-01

    TEST OF POISSON FAILURE ASSUMPTION. Chapter 1. INTRODUCTION. 1.1 Background. In stockage models... precipitates a regular failure pattern; it is also possible that the coding of scheduled vs unscheduled does not reflect what we would expect. Data

  11. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  12. Modern Cosmology: Assumptions and Limits

    Science.gov (United States)

    Hwang, Jai-Chan

    2012-06-01

    Physical cosmology tries to understand the Universe at large with its origin and evolution. Observational and experimental situations in cosmology do not allow us to proceed purely based on the empirical means. We examine in which sense our cosmological assumptions in fact have shaped our current cosmological worldview with consequent inevitable limits. Cosmology, as other branches of science and knowledge, is a construct of human imagination reflecting the popular belief system of the era. The question at issue deserves further philosophic discussions. In Whitehead's words, "philosophy, in one of its functions, is the critic of cosmologies." (Whitehead 1925).

  13. Modern Cosmology: Assumptions and Limits

    CERN Document Server

    Hwang, Jai-chan

    2012-01-01

    Physical cosmology tries to understand the Universe at large with its origin and evolution. Observational and experimental situations in cosmology do not allow us to proceed purely based on the empirical means. We examine in which sense our cosmological assumptions in fact have shaped our current cosmological worldview with consequent inevitable limits. Cosmology, as other branches of science and knowledge, is a construct of human imagination reflecting the popular belief system of the era. The question at issue deserves further philosophic discussions. In Whitehead's words, "philosophy, in one of its functions, is the critic of cosmologies". (Whitehead 1925)

  14. Challenged assumptions and invisible effects

    DEFF Research Database (Denmark)

    Wimmelmann, Camilla Lawaetz; Vitus, Kathrine; Jervelund, Signe Smith

    2017-01-01

    ...of two complete intervention courses and an analysis of the official intervention documents. Findings – This case study exemplifies how the basic normative assumptions behind an immigrant-oriented intervention and the intrinsic power relations therein may be challenged and negotiated by the participants. In particular, the assumed (power) relations inherent in immigrant-oriented educational health interventions, in which immigrants are in a novice position, are challenged, as the immigrants are experienced adults (and parents) in regard to healthcare. The paper proposes that such unexpected conditions for the implementation—different from the assumed conditions—not only challenge the implementation of the intervention but also potentially produce unanticipated yet valuable effects. Research implications – Newly arrived immigrants represent a hugely diverse and heterogeneous group of people with differing values...

  15. Faulty assumptions for repository requirements

    Energy Technology Data Exchange (ETDEWEB)

    Sutcliffe, W G

    1999-06-03

    Long term performance requirements for a geologic repository for spent nuclear fuel and high-level waste are based on assumptions concerning water use and subsequent deaths from cancer due to ingesting water contaminated with radioisotopes ten thousand years in the future. This paper argues that the assumptions underlying these requirements are faulty for a number of reasons. First, in light of the inevitable technological progress, including efficient desalination of water, over the next ten thousand years, it is inconceivable that a future society would drill for water near a repository. Second, even today we would not use water without testing its purity. Third, today many types of cancer are curable, and with the rapid progress in medical technology in general, and the prevention and treatment of cancer in particular, it is improbable that cancer caused by ingesting contaminated water will be a significant killer in the far future. This paper reviews the performance requirements for geological repositories and comments on the difficulties in proving compliance in the face of inherent uncertainties. The already tiny long-term risk posed by a geologic repository is presented and contrasted with contemporary everyday risks. A number of examples of technological progress, including cancer treatments, are advanced. The real and significant costs resulting from the overly conservative requirements are then assessed. Examples are given of how money (and political capital) could be put to much better use to save lives today and in the future. It is concluded that although a repository represents essentially no long-term risk, monitored retrievable dry storage (above or below ground) is the current best alternative for spent fuel and high-level nuclear waste.

  16. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under...

  17. A "unity assumption" does not promote intersensory integration.

    Science.gov (United States)

    Misceo, Giovanni F; Taylor, Nathanael J

    2011-01-01

    An account of intersensory integration is premised on knowing that different sensory inputs arise from the same object. Could, however, the combination of the inputs be impaired although the "unity assumption" holds? Forty observers viewed a square through a minifying (50%) lens while they simultaneously touched the square. Half could see and half could not see their haptic explorations of the square. Both groups, however, had reason to believe that they were touching and viewing the same square. Subsequent matches of the inspected square were mutually biased by touch and vision when the exploratory movements were visible. However, the matches were biased in the direction of the square's haptic size when observers could not see their exploratory movements. This impaired integration without the visible haptic explorations suggests that the unity assumption alone is not enough to promote intersensory integration.

  18. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Niklas

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other appl...

  19. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism, for which we use our recently proposed AI & M algorithm. We present experimental results on synthetic data that show that our approximate test statistic is a good indicator for whether data is mar relative to the given distributional assumptions.

  20. The Self in Guidance: Assumptions and Challenges.

    Science.gov (United States)

    Edwards, Richard; Payne, John

    1997-01-01

    Examines the assumptions of "self" made in the professional and managerial discourses of guidance. Suggests that these assumptions obstruct the capacity of guidance workers to explain their own practices. Drawing on contemporary debates over identity, modernity, and postmodernity, argues for a more explicit debate about the self in guidance. (RJM)

  1. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Directory of Open Access Journals (Sweden)

    Matt N. Williams

    2013-09-01

    Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.

  2. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to leve

  3. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, Rick [ICF International, Fairfax, VA (United States); Bluestein, Joel [ICF International, Fairfax, VA (United States); Rodriguez, Nick [ICF International, Fairfax, VA (United States); Knoke, Stu [ICF International, Fairfax, VA (United States)

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.

  4. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.

  5. New Cryptosystem Using Multiple Cryptographic Assumptions

    Directory of Open Access Journals (Sweden)

    E. S. Ismail

    2011-01-01

    Full Text Available Problem statement: A cryptosystem is a way for a sender and a receiver to communicate digitally by which the sender can send the receiver any confidential or private message by first encrypting it using the receiver's public key. Upon receiving the encrypted message, the receiver can confirm the originality of the message's contents using his own secret key. Up to now, most of the existing cryptosystems were developed based on a single cryptographic assumption like factoring, discrete logarithms, quadratic residue or elliptic curve discrete logarithm. Although these schemes remain secure today, one day in the near future they may be broken if one finds a polynomial algorithm that can efficiently solve the underlying cryptographic assumption. Approach: By this motivation, we designed a new cryptosystem based on two cryptographic assumptions: quadratic residue and discrete logarithms. We integrated these two assumptions in our encrypting and decrypting equations so that the former depends on one public key whereas the latter depends on one corresponding secret key and two secret numbers. Each of the public and secret keys in our scheme determines the assumptions we use. Results: The newly developed cryptosystem is shown to be secure against three commonly considered algebraic attacks using a heuristic security technique. The efficiency performance of our scheme requires 2Texp + 2Tmul + Thash time complexity for encryption and Texp + 2Tmul + Tsrt time complexity for decryption, and this magnitude of complexity is considered minimal for cryptosystems based on multiple cryptographic assumptions. Conclusion: The new cryptosystem based on multiple cryptographic assumptions offers a greater security level than schemes based on a single cryptographic assumption. The adversary has to solve the two underlying problems simultaneously to recover the original message from the received encrypted message, but this is very unlikely to happen.

  6. STABILITY ANALYSIS OF THE DYNAMIC INPUT-OUTPUT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    GuoChonghui; TangHuanwen

    2002-01-01

    The dynamic input-output model is well known in economic theory and practice. In this paper, the asymptotic stability and balanced growth solutions of the dynamic input-output system are considered. Under some natural assumptions which do not require the technical coefficient matrix to be indecomposable, it has been proved that the dynamic input-output system is not asymptotically stable and that the closed dynamic input-output model has a balanced growth solution.

  7. A Comparison of Closed World Assumptions

    Institute of Scientific and Technical Information of China (English)

    沈一栋

    1992-01-01

    In this paper, we introduce a notion of the family of closed world assumptions and compare several well-known closed world approaches in the family with respect to the extent to which an incomplete database is completed.

  8. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies.

  9. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  10. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    2008-01-01

    thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...... in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...

  11. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  12. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    2008-01-01

    in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...

  13. Mexican-American Cultural Assumptions and Implications.

    Science.gov (United States)

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  14. The homogeneous marginal utility of income assumption

    NARCIS (Netherlands)

    Demuynck, T.

    2015-01-01

    We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and facilit

  15. Mexican-American Cultural Assumptions and Implications.

    Science.gov (United States)

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  16. Culturally Biased Assumptions in Counseling Psychology

    Science.gov (United States)

    Pedersen, Paul B.

    2003-01-01

    Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…

  17. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  18. Wave energy input into the Ekman layer

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper is concerned with the wave energy input into the Ekman layer, based on three observational facts that surface waves could significantly affect the profile of the Ekman layer. Under the assumption of constant vertical diffusivity, the analytical form of wave energy input into the Ekman layer is derived. Analysis of the energy balance shows that the energy input to the Ekman layer through the wind stress and the interaction of the Stokes-drift with planetary vorticity can be divided into two kinds. One is the wind energy input, and the other is the wave energy input, which is dependent on wind speed, wave characteristics and the wind direction relative to the wave direction. Estimates of wave energy input show that it can be up to 10% in high-latitude and high-wind-speed areas, and higher than 20% in the Antarctic Circumpolar Current, compared with the wind energy input into the classical Ekman layer. Results of this paper are of significance to the study of wave-induced large-scale effects.

  19. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    CERN Document Server

    Côté, Benoit; Ritter, Christian; Herwig, Falk; Venn, Kim A

    2016-01-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of Type Ia supernovae and the strength of gal...

  20. Do efficiency scores depend on input mix?

    DEFF Research Database (Denmark)

    Asmild, Mette; Hougaard, Jens Leth; Kronborg, Dorte

    2013-01-01

    In this paper we examine the possibility of using the standard Kruskal-Wallis (KW) rank test in order to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix of the observations. Since the DEA frontier is estimated, many standard assumptions for evaluating the KW test statistic are violated. Therefore, we propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, unlike existing approaches... When the assumption of mix independence is rejected, the implication is that it is, for example, impossible to determine whether machine-intensive projects are more or less efficient than labor-intensive projects.

  1. The OPERA hypothesis: assumptions and clarifications.

    Science.gov (United States)

    Patel, Aniruddh D

    2012-04-01

    Recent research suggests that musical training enhances the neural encoding of speech. Why would musical training have this effect? The OPERA hypothesis proposes an answer on the basis of the idea that musical training demands greater precision in certain aspects of auditory processing than does ordinary speech perception. This paper presents two assumptions underlying this idea, as well as two clarifications, and suggests directions for future research.

  2. On distributional assumptions and whitened cosine similarities

    DEFF Research Database (Denmark)

    Loog, Marco

    2008-01-01

    Recently, an interpretation of the whitened cosine similarity measure as a Bayes decision rule was proposed (C. Liu, "The Bayes Decision Rule Induced Similarity Measures," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, June 2007). This communication makes the observation that some of the distributional assumptions made to derive this measure are very restrictive and, considered simultaneously, even inconsistent.

  3. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  4. DDH-like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike;

    2011-01-01

    generalized to use instead d-DDH, and we show in the generic group model that d-DDH is harder than DDH. This means that virtually any application of DDH can now be realized with the same (amortized) efficiency, but under a potentially weaker assumption. On the negative side, we also show that d-DDH, just like...... DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X)= X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups...... and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen...

  5. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  6. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Science.gov (United States)

    Hsu, Anne; Griffiths, Thomas L

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  7. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  8. Closed World Assumption for Disjunctive Reasoning

    Institute of Scientific and Technical Information of China (English)

    WANG Kewen; ZHOU Lizhu

    2001-01-01

    In this paper, the relationship between argumentation and closed world reasoning for disjunctive information is studied. In particular, the authors propose a simple and intuitive generalization of the closed world assumption (CWA) for general disjunctive deductive databases (with default negation). This semantics, called DCWA, allows a natural argumentation-based interpretation and can be used to represent reasoning for disjunctive information. We compare DCWA with GCWA and prove that DCWA extends Minker's GCWA to the class of disjunctive databases with default negation. Also we compare our semantics with some related approaches. In addition, the computational complexity of DCWA is investigated.

  9. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom failed to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.

  10. 39 Questionable Assumptions in Modern Physics

    Science.gov (United States)

    Volk, Greg

    2009-03-01

    The growing body of anomalies in new energy, low energy nuclear reactions, astrophysics, atomic physics, and entanglement, combined with the failure of the Standard Model and string theory to predict many of the most basic fundamental phenomena, all point to a need for major new paradigms. Not Band-Aids, but revolutionary new ways of conceptualizing physics, in the spirit of Thomas Kuhn's The Structure of Scientific Revolutions. This paper identifies a number of long-held, but unproven assumptions currently being challenged by an increasing number of alternative scientists. Two common themes, both with venerable histories, keep recurring in the many alternative theories being proposed: (1) Mach's Principle, and (2) toroidal, vortex particles. Matter-based Mach's Principle differs from both space-based universal frames and observer-based Einsteinian relativity. Toroidal particles, in addition to explaining electron spin and the fundamental constants, satisfy the basic requirement of Gauss's misunderstood B Law, that motion itself circulates. Though a comprehensive theory is beyond the scope of this paper, it will suggest alternatives to the long list of assumptions in context.

  11. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires...... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists....

  12. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  13. Catalyst Deactivation: Control Relevance of Model Assumptions

    Directory of Open Access Journals (Sweden)

    Bernt Lie

    2000-10-01

    Full Text Available Two principles for describing catalyst deactivation are discussed, one based on the deactivation mechanism, the other based on the activity and catalyst age distribution. When the model is based upon activity decay, it is common to use a mean activity developed from the steady-state residence time distribution. We compare control-relevant properties of such an approach with those of a model based upon the deactivation mechanism. Using a continuous stirred tank reactor as an example, we show that the mechanistic approach and the population balance approach lead to identical models. However, common additional assumptions used for activity-based models lead to model properties that may deviate considerably from the correct one.

  14. Inference and Assumption in Historical Seismology

    Science.gov (United States)

    Musson, R. M. W.

    The principal aim in studies of historical earthquakes is usually to be able to derive parameters for past earthquakes from macroseismic or other data and thus extend back in time parametric earthquake catalogues, often with improved seismic hazard studies as the ultimate goal. In cases of relatively recent historical earthquakes, for example, those of the 18th and 19th centuries, it is often the case that there is such an abundance of available macroseismic data that estimating earthquake parameters is relatively straightforward. For earlier historical periods, especially medieval and earlier, and also for areas where settlement or documentation are sparse, the situation is much harder. The seismologist often finds that he has only a few data points (or even one) for an earthquake that nevertheless appears to be regionally significant. In such cases, it is natural that the investigator will attempt to make the most of the available data, expanding it by making working assumptions, and from these deriving conclusions by inference (i.e. the process of proceeding logically from some premise). This can be seen in a number of existing studies; in some cases extremely slight data are so magnified by the use of inference that one must regard the results as tentative in the extreme. Two main types of inference can be distinguished. The first type is inference from documentation. This is where assumptions are made such as: the absence of a report of the earthquake from this monastic chronicle indicates that at this locality the earthquake was not felt. The second type is inference from seismicity. Here one deals with arguments such as: all recent earthquakes felt at town X are events occurring in seismic zone Y, therefore this ancient earthquake which is only reported at town X probably also occurred in this zone.

  15. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    Science.gov (United States)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy's star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields. OMEGA

  16. Towards New Probabilistic Assumptions in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Schumann Andrew

    2015-01-01

    Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot be observable and additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed since George Herbert Mead and Herbert Blumer. In statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In the paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values. For accepting the variety of such phenomena we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.

  17. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

    Purpose of review: To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings: We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary: Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600
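
    To make the "simple model" point concrete, here is a deliberately minimal SI-type sketch with a test-and-treat coverage parameter; the compartments and all rate values are assumed for illustration and are not taken from the review.

      # Fractions of the population: susceptible, infected untreated, treated.
      S, I_u, I_t = 0.99, 0.01, 0.0
      beta = 0.5    # transmission rate per year (assumed)
      tau = 0.3     # annual test-and-treat uptake (assumed)
      eps = 0.9     # reduction in infectiousness under treatment (assumed)
      mu = 0.02     # entry/exit rate of the sexually active population (assumed)

      dt = 0.01
      for _ in range(int(50.0 / dt)):                 # 50 simulated years
          foi = beta * (I_u + (1.0 - eps) * I_t)      # force of infection
          dS = mu - foi * S - mu * S
          dIu = foi * S - tau * I_u - mu * I_u
          dIt = tau * I_u - mu * I_t
          S, I_u, I_t = S + dt * dS, I_u + dt * dIu, I_t + dt * dIt

      print(f"equilibrium prevalence with treatment: {I_u + I_t:.3f}")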

  18. Roy's specific life values and the philosophical assumption of humanism.

    Science.gov (United States)

    Hanna, Debra R

    2013-01-01

    Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.

  19. Are Financial Variables Inputs in Delivered Production Functions?

    Directory of Open Access Journals (Sweden)

    Miguel Kiguel

    1995-03-01

    Fischer's classic (1974) paper develops conditions under which it is appropriate to use money as an input in a 'delivered' production function. In this paper, we extend Fischer's model I (the Baumol-Tobin inventory approach) by incorporating credit into the analysis. Our investigation of the extended model brings out a very restrictive but necessary implicit assumption employed by Fischer to treat money as an input: namely, that there exists a binding constraint on the use of money. A similar result holds for our more general model.
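
    Since the abstract leans on Fischer's model I, it may help to recall the standard Baumol-Tobin square-root rule it builds on, where Y is income, i the nominal interest rate, and b the fixed cost per bond-to-money conversion; the credit extension analyzed in the paper is not reproduced here:

      \[
        TC(C) = b\,\frac{Y}{C} + i\,\frac{C}{2},
        \qquad
        C^{*} = \sqrt{\frac{2bY}{i}},
        \qquad
        \bar{M} = \frac{C^{*}}{2} = \sqrt{\frac{bY}{2i}} .
      \]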

  20. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, Rampal S.; Alonso, David; McKane, Alan J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the zero-sum assumption.

  2. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  4. Wage Differentials among Workers in Input-Output Models.

    Science.gov (United States)

    Filippini, Luigi

    1981-01-01

    Using an input-output framework, the author derives hypotheses on wage differentials from the assumption that human capital (in this case, education) explains workers' wage differentials. The hypothesized wage differentials are tested on data from the Italian economy. (RW)
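
    A small numerical sketch of the kind of input-output accounting involved, with assumed coefficients and hypothetical low/high-education labor inputs; it illustrates the framework, not Filippini's actual specification.

      import numpy as np

      # Leontief quantity model: gross outputs x satisfy x = A x + d.
      A = np.array([[0.2, 0.3],
                    [0.1, 0.4]])          # technical coefficients (assumed)
      d = np.array([100.0, 50.0])         # final demand by sector (assumed)
      x = np.linalg.solve(np.eye(2) - A, d)   # x = (I - A)^(-1) d

      # Direct labor requirements per unit of output, by education level
      # (hypothetical), and wages reflecting a human-capital differential.
      l_low, l_high = np.array([0.5, 0.2]), np.array([0.1, 0.3])
      w_low, w_high = 1.0, 1.8

      wage_bill = w_low * (l_low @ x) + w_high * (l_high @ x)
      print("gross outputs:", x.round(1), "total wage bill:", round(wage_bill, 1))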

  5. Input or intimacy

    Directory of Open Access Journals (Sweden)

    Judit Navracsics

    2014-01-01

    According to the critical period hypothesis, the earlier the acquisition of a second language starts, the better. On this view, owing to the plasticity of the brain, a second language can be acquired successfully up until a certain age. Early second language learners are commonly said to have an advantage over later ones, especially in phonetic/phonological acquisition, and native-like pronunciation is said to be most likely achieved by young learners. However, there is evidence of accent-free speech in second languages learnt after puberty as well. Occasionally, on the other hand, a non-native accent may appear even in early second (or third) language acquisition. Cross-linguistic influences are natural in multilingual development, and we would expect the dominant language to have an impact on the weaker one(s). The dominant language is usually the one that provides the largest amount of input for the child. But is it always the amount that counts? Perhaps sometimes other factors, such as emotions, come into play? In this paper, data obtained from an English-Persian-Hungarian trilingual pair of siblings (aged under 4 and 3, respectively) is analyzed, with a special focus on cross-linguistic influences at the phonetic/phonological levels. It will be shown that beyond the amount of input there are more important factors that trigger interference in multilingual development.

  6. Legal assumptions for private company claim for additional (supplementary) payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    The subject matter of this article is the legal assumptions that must be met for a private company to claim an additional (supplementary) payment. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payment in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting that creates individual obligations for additional payments. The third assumption is definiteness regarding the sum of the payment and its due date. The sending of the claim by the relevant company body is set out as the fourth legal assumption for the realization of the company's right to claim additional payments from a member of a private company.

  8. Access to Research Inputs

    DEFF Research Database (Denmark)

    Czarnitzki, Dirk; Grimpe, Christoph; Pellens, Maikel

    2015-01-01

    The viability of modern open science norms and practices depends on public disclosure of new knowledge, methods, and materials. However, increasing industry funding of research can restrict the dissemination of results and materials. We show, through a survey sample of 837 German scientists in life sciences, natural sciences, engineering, and social sciences, that scientists who receive industry funding are twice as likely to deny requests for research inputs as those who do not. Receiving external funding in general does not affect denying others access. Scientists who receive external funding of any kind are, however, 50% more likely to be denied access to research materials by others, but this is not affected by being funded specifically by industry.

  9. NGNP: High Temperature Gas-Cooled Reactor Key Definitions, Plant Capabilities, and Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Phillip Mills

    2012-02-01

    This document is intended to provide a tool for the Next Generation Nuclear Plant (NGNP) Project with which to collect and identify key definitions, plant capabilities, and inputs and assumptions to be used in ongoing efforts related to the licensing and deployment of a high temperature gas-cooled reactor (HTGR). These definitions, capabilities, and assumptions are extracted from a number of sources, including NGNP Project documents such as licensing-related white papers [References 1-11] and previously issued requirements documents [References 13-15]. Also included is information agreed upon by the NGNP Regulatory Affairs group's Licensing Working Group and Configuration Council. The NGNP Project approach to licensing an HTGR plant via a combined license (COL) is defined within the referenced white papers and reference [12], and is not duplicated here.

  10. Exposing Trust Assumptions in Distributed Policy Enforcement (Briefing Charts)

    Science.gov (United States)

    2016-06-21

    • Coordinated defenses appear to be feasible. • Writing policies from scratch is hard – exposing assumptions requires people to think about what assumptions... • ...critical capabilities as: adaptation to dynamic service availability; complex situational dynamics (e.g., differentiating between bot-net and...

  11. Co-Dependency: An Examination of Underlying Assumptions.

    Science.gov (United States)

    Myer, Rick A.; And Others

    1991-01-01

    Discusses need for careful examination of codependency as diagnostic category. Critically examines assumptions that codependency is disease, addiction, or predetermined by the environment. Discusses implications of assumptions. Offers recommendations for mental health counselors focusing on need for systematic research, redirection of efforts to…

  13. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Science.gov (United States)

    2010-07-01

    ... MULTIEMPLOYER PLANS, § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of... Calculations under this part must be based on methods and assumptions that are reasonable in the aggregate, based on...

  14. Special Theory of Relativity without special assumptions and tachyonic motion

    Directory of Open Access Journals (Sweden)

    E. Kapuścik

    2010-01-01

    The most general form of transformations of space-time coordinates in the Special Theory of Relativity, based solely on physical assumptions, is described. Only the linearity of the space-time transformations and the constancy of the speed of light are used as assumptions. An application to tachyonic motion is indicated.
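
    As a reminder of what these two assumptions single out in the familiar subluminal case (the paper's more general form, which also admits tachyonic branches, is not reproduced here):

      \[
        x' = \gamma\,(x - vt), \qquad
        t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad
        \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
      \]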

  15. 40 CFR 761.2 - PCB concentration assumptions for use.

    Science.gov (United States)

    2010-07-01

    ... PCB concentration assumptions for use..., AND USE PROHIBITIONS, General, § 761.2 PCB concentration assumptions for use. (a)(1) Any person may..., oil-filled cable, and rectifiers whose PCB concentration is not established contain PCBs at < 50 ppm...

  16. Estimating nonstationary input signals from a single neuronal spike train.

    Science.gov (United States)

    Kim, Hideaki; Shinomoto, Shigeru

    2012-11-01

    Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic assumption that presynaptic activity is constant over time. Here, we propose tracking temporal variations in input parameters with a two-step analysis method. First, nonstationary firing characteristics comprising the firing rate and non-Poisson irregularity are estimated from a spike train using a computationally feasible state-space algorithm. Then, information about the firing characteristics is converted into likely input parameters over time using a transformation formula, which was constructed by inverting the neuronal forward transformation of the input current to output spikes. By analyzing spike trains recorded in vivo, we found that neuronal input parameters are similar in the primary visual cortex V1 and middle temporal area, whereas parameters in the lateral geniculate nucleus of the thalamus were markedly different.
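
    A toy version of the first step, estimating a time-varying firing rate from one spike train. The paper uses a state-space algorithm; a fixed-bandwidth Gaussian kernel smoother stands in here as an assumed, simpler substitute, and the second (inversion) step is model-specific and omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulate an inhomogeneous Poisson spike train by thinning.
      T, rate_max = 10.0, 40.0
      true_rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t / 5.0)
      cand = rng.uniform(0, T, rng.poisson(rate_max * T))
      keep = rng.uniform(0, rate_max, cand.size) < true_rate(cand)
      spikes = np.sort(cand[keep])

      # Gaussian-kernel estimate of the instantaneous rate (spikes/s).
      grid = np.linspace(0, T, 200)
      h = 0.3                                  # bandwidth in seconds (assumed)
      z = (grid[:, None] - spikes[None, :]) / h
      rate_hat = np.exp(-0.5 * z**2).sum(axis=1) / (h * np.sqrt(2 * np.pi))

      print("estimated rate at t = 2.5 s:",
            round(float(np.interp(2.5, grid, rate_hat)), 1))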

  17. The Kepler Input Catalog

    Science.gov (United States)

    Latham, D. W.; Brown, T. M.; Monet, D. G.; Everett, M.; Esquerdo, G. A.; Hergenrother, C. W.

    2005-12-01

    The Kepler mission will monitor 170,000 planet-search targets during the first year, and 100,000 after that. The Kepler Input Catalog (KIC) will be used to select optimum targets for the search for habitable earth-like transiting planets. The KIC will include all known catalogued stars in an area of about 177 square degrees centered at RA 19:22:40 and Dec +44:30 (l=76.3 and b=+13.5). 2MASS photometry will be supplemented with new ground-based photometry obtained in the SDSS g, r, i, and z bands plus a custom filter centered on the Mg b lines, using KeplerCam on the 48-inch telescope at the Whipple Observatory on Mount Hopkins, Arizona. The photometry will be used to estimate stellar characteristics for all stars brighter than K 14.5 mag. The KIC will include effective temperature, surface gravity, metallicity, reddening, distance, and radius estimates for these stars. The CCD images are pipeline processed to produce instrumental magnitudes at PSI. The photometry is then archived and transformed to the SDSS system at HAO, where the astrophysical analysis of the stellar characteristics is carried out. The results are then merged with catalogued data at the USNOFS to produce the KIC. High dispersion spectroscopy with Hectochelle on the MMT will be used to supplement the information for many of the most interesting targets. The KIC will be released before launch for use by the astronomical community and will be available for queries over the internet. Support from the Kepler mission is gratefully acknowledged.

  18. Asymptotic Stability and Balanced Growth Solution of the Singular Dynamic Input-Output System

    Institute of Scientific and Technical Information of China (English)

    Chonghui Guo; Huanwen Tang

    2004-01-01

    The dynamic input-output system is well known in economic theory and practice. In this paper the asymptotic stability and balanced growth solution of the dynamic input-output system are considered. Under three natural assumptions, we obtain four theorems about asymptotic stability and balanced growth solution of the dynamic input-output system and bring together in a unified manner some contributions scattered in the literature.
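
    For reference, a common statement of the dynamic Leontief model behind such results; note that in the singular (descriptor) case studied in the paper the capital matrix B is not invertible, which is what makes the analysis non-trivial:

      \[
        x(t) = A\,x(t) + B\,\dot{x}(t) + c(t),
      \]

    and a balanced growth solution of the closed system ($c(t) = 0$) has the form $x(t) = e^{\lambda t} x_{0}$ with

      \[
        \bigl(I - A - \lambda B\bigr)\,x_{0} = 0 .
      \]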

  19. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    Energy Technology Data Exchange (ETDEWEB)

    R.E. Sweeney

    2001-02-08

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule, and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update, incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), the 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate, and other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.

  20. Serial Input Output

    Energy Technology Data Exchange (ETDEWEB)

    Waite, Anthony; /SLAC

    2011-09-07

    Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) have some characteristics in common. Every stream, record and block has a name with the condition that each
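
    A hypothetical Python rendering of the stream/record/block layering described above; the class names, header format, and method signatures are invented for illustration and are not the actual SIO API.

      class Block:
          """User-supplied unit that does the real packing/unpacking."""
          def __init__(self, name):
              self.name = name
          def pack(self):                 # -> bytes; user implements
              raise NotImplementedError
          def unpack(self, payload):      # user implements
              raise NotImplementedError

      class Record:
          """A coherent grouping of blocks (header, event, error log, ...)."""
          def __init__(self, name):
              self.name, self.blocks = name, []
          def connect(self, block):       # one block may join many records
              self.blocks.append(block)

      class Stream:
          """Connection to a file, opened for reading OR writing, not both."""
          def __init__(self, path, mode):
              assert mode in ("r", "w")
              self.file = open(path, mode + "b")
          def write_record(self, record):
              for blk in record.blocks:
                  payload = blk.pack()
                  head = f"{record.name}:{blk.name}:{len(payload)}\n".encode()
                  self.file.write(head + payload)   # a reader skips unknown blocks
          def close(self):
              self.file.close()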

  1. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  2. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    In recent years, increasing interest has been paid to IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant literature... Few contributions represent the three remaining discourses, which unjustifiably leaves out issues that research could and most probably should investigate. In order to highlight research potentials, limitations, and underlying assumptions of each discourse, we develop four IT PPM metaphors... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations' IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases.

  3. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    Swarup Mohalik; R Ramanujam

    2002-04-01

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the commitments offered by the other at that state. We model examples like reliable bit transmission and sequence transmission protocols in this framework and discuss how assumption-commitment structure facilitates compositional design of such protocols. We prove a decomposition theorem which states that every protocol specified globally as a finite state system can be decomposed into such an assumption compatible system. We also present a syntactic characterization of this class using top level parallel composition.
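
    A toy illustration of the compatibility constraint, assuming invented local states, assumptions, and commitments for a two-process sender/receiver system; it only shows how assumption/commitment pairs prune the global state space.

      from itertools import product

      # local state -> (assumption about the peer's commitment, own commitment)
      sender = {"idle": (None, "ready"), "sending": ("ack_ok", "busy")}
      receiver = {"listening": (None, "ack_ok"), "full": (None, "ack_blocked")}

      def compatible(s, r):
          need_s, offer_s = sender[s]
          need_r, offer_r = receiver[r]
          return (need_s is None or need_s == offer_r) and \
                 (need_r is None or need_r == offer_s)

      admissible = [(s, r) for s, r in product(sender, receiver)
                    if compatible(s, r)]
      print(admissible)   # ('sending', 'full') is pruned: assumption unmet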

  4. Tails assumptions and posterior concentration rates for mixtures of Gaussians

    OpenAIRE

    Naulet, Zacharie; Rousseau, Judith

    2016-01-01

    Nowadays in density estimation, posterior rates of convergence for location and location-scale mixtures of Gaussians are only known under light-tail assumptions; with better rates achieved by location mixtures. It is conjectured, but not proved, that the situation should be reversed under heavy tails assumptions. The conjecture is based on the feeling that there is no need to achieve a good order of approximation in regions with few data (say, in the tails), favoring location-scale mixtures w...

  5. US Intervention in Failed States: Bad Assumptions=Poor Outcomes

    Science.gov (United States)

    2002-01-01

    NATIONAL DEFENSE UNIVERSITY, NATIONAL WAR COLLEGE, strategic logic essay: US Intervention in Failed States: Bad Assumptions = Poor Outcomes (2002). ...the country remains in the grip of poverty, natural disasters, and stagnation. Rwanda, another small African country, is populated principally...

  6. Input in Second Language Acquisition.

    Science.gov (United States)

    Gass, Susan M., Ed.; Madden, Carolyn G., Ed.

    This collection of conference papers includes: "When Does Teacher Talk Work as Input?"; "Cultural Input in Second Language Learning"; "Skilled Variation in a Kindergarten Teacher's Use of Foreigner Talk"; "Teacher-Pupil Interaction in Second Language Development"; "Foreigner Talk in the University…

  7. Inputs for L2 Acquisition.

    Science.gov (United States)

    Saleemi, Anjum P.

    1989-01-01

    Major approaches of describing or examining linguistic data from a potential target language (input) are analyzed for adequacy in addressing the concerns of second language learning theory. Suggestions are made for making the best of these varied concepts of input and for reformulation of a unified concept. (MSE)

  8. Input and Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    周笑盈

    2011-01-01

    Behaviorist, mentalist, and interactionist accounts place different emphases on the role of input in Second Language Acquisition. To underscore the importance of input for second language teaching, it is indispensable to discuss the characteristics of input and to explore its effects.

  9. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2016-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute over what constitutes proper assumptions, even in the absence of corroborating or falsifying empirical evidence. We also discuss how changing assumptions may drive future progress in the resource-based view.

  10. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for the bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
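
    A simulation sketch in the spirit of the abstract, assuming a synthetic population, degree-proportional inclusion probabilities, and a Volz-Heckathorn-style inverse-degree estimator; real RDS recruits along network ties, so this only isolates the with/without-replacement contrast.

      import numpy as np

      rng = np.random.default_rng(7)
      N = 1000
      degree = rng.integers(1, 11, N)                   # network degrees (assumed)
      trait = rng.uniform(size=N) < 0.2 + 0.03 * degree # degree-correlated trait

      p = degree / degree.sum()                         # degree-proportional sampling

      def vh_estimate(idx):
          w = 1.0 / degree[idx]                         # inverse-degree weights
          return np.sum(w * trait[idx]) / np.sum(w)

      for frac in (0.05, 0.20, 0.40):
          n = int(frac * N)
          est_with = vh_estimate(rng.choice(N, size=n, replace=True, p=p))
          est_wo = vh_estimate(rng.choice(N, size=n, replace=False, p=p))
          print(f"fraction {frac:.2f}: with={est_with:.3f} without={est_wo:.3f}")

      print("true proportion:", round(float(trait.mean()), 3))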

  11. On the Necessary and Sufficient Assumptions for UC Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio

    2010-01-01

    We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results for all of them. Perhaps most interestingly we show that: • For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passively secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for the KRA model: any non-trivial KRA is sufficient for UC computation. • We show...

  12. More Efficient VLR Group Signature Based on DTDH Assumption

    Directory of Open Access Journals (Sweden)

    Lizhen Ma

    2012-10-01

    In VLR (verifier-local revocation) group signatures, only verifiers are involved in the revocation of a member, while signers are not; VLR group signature schemes are therefore suitable for mobile environments. To meet the requirement of speed, reducing computation costs and shortening signature length are two central goals of current research on VLR group signatures. A new VLR group signature is proposed based on the q-SDH and DTDH assumptions. Compared with existing VLR group signatures based on the DTDH assumption, the proposed scheme not only has the shortest signature size but also the lowest computation costs, and it is applicable to mobile environments such as IEEE 802.1x.

  13. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including teams' internal assumptions, planning for system integration in early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements, and explains the driving scenarios, constraints, and other issues behind them.

  14. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    In recent years, increasing interest has been paid to IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant literature... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations' IT PPM practices. We develop four IT PPM metaphors: (1) IT PPM as the top management marketplace, (2) IT PPM as the cause of social dilemmas at the lower organizational levels, (3) IT PPM as polity between different organizational interests, and (4) IT PPM as power relations that suppress creativity and diversity. Our metaphors can be used by practitioners... Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases.

  15. Input management of production systems.

    Science.gov (United States)

    Odum, E P

    1989-01-13

    Nonpoint sources of pollution, which are largely responsible for stressing regional and global life-supporting atmosphere, soil, and water, can only be reduced (and ultimately controlled) by input management that involves increasing the efficiency of production systems and reducing the inputs of environmentally damaging materials. Input management requires a major change, an about-face, in the approach to management of agriculture, power plants, and industries because the focus is on waste reduction and recycling rather than on waste disposal. For large-scale ecosystem-level situations a top-down hierarchical approach is suggested and illustrated by recent research in agroecology and landscape ecology.

  16. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Froese et al. are realistic and consistent. We further show that the assumption that density dependence is described by a stock-recruitment relationship is responsible for determining whether a peak in the cohort biomass of a population occurs late or early in life. Finally, we argue...

  17. The Advanced LIGO Input Optics

    CERN Document Server

    Mueller, Chris; Ciani, Giacomo; DeRosa, Ryan; Effler, Anamaria; Feldbaum, David; Frolov, Valery; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; King, Eleanor; Kokeyama, Keiko; Korth, William; Martin, Rodica; Mullavey, Adam; Poeld, Jan; Quetschke, Volker; Reitze, David; Tanner, David; Williams, Luke; Mueller, Guido

    2016-01-01

    The Advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they h...

  18. Nonlinear input-output systems

    Science.gov (United States)

    Hunt, L. R.; Luksic, Mladen; Su, Renjeng

    1987-01-01

    Necessary and sufficient conditions are found for the nonlinear system $\dot{x} = f(x) + u\,g(x)$, $y = h(x)$ to be locally feedback equivalent to the controllable linear system $\dot{\xi} = A\xi + bv$, $y = C\xi$, which has linear output. Only the single-input, single-output case is considered; however, the results generalize to multi-input, multi-output systems.

  19. 41 CFR 60-3.9 - No assumption of validity.

    Science.gov (United States)

    2010-07-01

    § 60-3.9 No assumption of validity. Public Contracts and Property Management, Other Provisions Relating to Public Contracts... Uniform Guidelines on Employee Selection Procedures (1978), General Principles...

  20. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  1. "Touch Me, Like Me": Testing an Encounter Group Assumption

    Science.gov (United States)

    Boderman, Alvin; And Others

    1972-01-01

    An experiment to test an encounter group assumption that touching increases interpersonal attraction was conducted. College women were randomly assigned to a touch or no-touch condition. A comparison of total evaluation scores verified the hypothesis: subjects who touched the accomplice perceived her as a more attractive person than those who did…

  2. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    Science.gov (United States)

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  3. Woman's Moral Development in Search of Philosophical Assumptions.

    Science.gov (United States)

    Sichel, Betty A.

    1985-01-01

    Examined is Carol Gilligan's thesis that men and women use different moral languages to resolve moral dilemmas, i.e., women speak a language of caring and responsibility, and men speak a language of rights and justice. Her thesis is not grounded with adequate philosophical assumptions. (Author/RM)

  4. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes their futility. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics.

  5. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  6. Using Contemporary Art to Challenge Cultural Values, Beliefs, and Assumptions

    Science.gov (United States)

    Knight, Wanda B.

    2006-01-01

    Art educators, like many other educators born or socialized within the main-stream culture of a society, seldom have an opportunity to identify, question, and challenge their cultural values, beliefs, assumptions, and perspectives because school culture typically reinforces those they learn at home and in their communities (Bush & Simmons, 1990).…

  7. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    Estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
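
    A tiny illustration of the failure mode, using synthetic correlated attributes; the conditional (chain-factored) estimate plays the role of the graphical-model idea, while the product of marginals is the independence assumption.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      make = rng.integers(0, 2, n)                 # e.g. 0 = Honda, 1 = BMW
      # 'model' is determined by 'make', i.e. strongly correlated with it:
      model = np.where(make == 0, rng.integers(0, 2, n), rng.integers(2, 4, n))

      sel_make = np.mean(make == 0)                # P(make = 0)
      sel_model = np.mean(model == 0)              # P(model = 0)
      true_joint = np.mean((make == 0) & (model == 0))

      independence = sel_make * sel_model          # naive optimizer estimate
      conditional = sel_make * np.mean(model[make == 0] == 0)  # P(model | make)

      print(f"true={true_joint:.3f}  independence={independence:.3f}  "
            f"conditional={conditional:.3f}")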

  8. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    Science.gov (United States)

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  9. Unpacking Assumptions in Research Synthesis: A Critical Construct Synthesis Approach

    Science.gov (United States)

    Wolgemuth, Jennifer R.; Hicks, Tyler; Agosto, Vonzell

    2017-01-01

    Research syntheses in education, particularly meta-analyses and best-evidence syntheses, identify evidence-based practices by combining findings across studies whose constructs are similar enough to warrant comparison. Yet constructs come preloaded with social, historical, political, and cultural assumptions that anticipate how research problems…

  10. Challenging Teachers' Pedagogic Practice and Assumptions about Social Media

    Science.gov (United States)

    Cartner, Helen C.; Hallas, Julia L.

    2017-01-01

    This article describes an innovative approach to professional development designed to challenge teachers' pedagogic practice and assumptions about educational technologies such as social media. Developing effective technology-related professional development for teachers can be a challenge for institutions and facilitators who provide this…

  11. Assumptions regarding right censoring in the presence of left truncation.

    Science.gov (United States)

    Qian, Jing; Betensky, Rebecca A

    2014-04-01

    Clinical studies using complex sampling often involve both truncation and censoring, where there are options for the assumptions of independence of censoring and event and for the relationship between censoring and truncation. In this paper, we clarify these choices, show certain equivalences, and provide examples.
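
    A small product-limit sketch with delayed entry, assuming synthetic data and independent censoring: a subject contributes to the risk set only between its truncation (entry) time and its event or censoring time.

      import numpy as np

      entry = np.array([0.0, 1.0, 2.0, 0.5, 3.0])   # left-truncation times
      exit_ = np.array([5.0, 4.0, 6.0, 2.5, 7.0])   # event or censoring times
      event = np.array([1,   0,   1,   1,   0])     # 1 = event, 0 = censored

      surv = 1.0
      for t in np.sort(exit_[event == 1]):
          at_risk = np.sum((entry < t) & (exit_ >= t))  # delayed-entry risk set
          d = np.sum((exit_ == t) & (event == 1))
          surv *= 1.0 - d / at_risk
          print(f"t={t:.1f}  at risk={at_risk}  S(t)={surv:.3f}")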

  12. DDH-like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2011-01-01

    DDH is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X) = X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups...

  13. Quantum cryptography in real-life applications: Assumptions and security

    Science.gov (United States)

    Zhao, Yi

    Quantum cryptography, or quantum key distribution (QKD), provides a means of unconditionally secure communication. The security is in principle based on the fundamental laws of physics. Security proofs show that if quantum cryptography is appropriately implemented, even the most powerful eavesdropper cannot decrypt the message from a cipher. The implementations of quantum crypto-systems in real life may not fully comply with the assumptions made in the security proofs. Such discrepancy between the experiment and the theory can be fatal to the security of a QKD system. In this thesis we address a number of these discrepancies. A perfect single-photon source is often assumed in many security proofs. However, a weak coherent source is widely used in a real-life QKD implementation. Decoy state protocols have been proposed as a novel approach to dramatically improve the performance of a weak coherent source based QKD implementation without jeopardizing its security. Here, we present the first experimental demonstrations of decoy state protocols. Our experimental scheme was later adopted by most decoy state QKD implementations. In the security proof of decoy state protocols as well as many other QKD protocols, it is widely assumed that a sender generates a phase-randomized coherent state. This assumption has been enforced in few implementations. We close this gap in two steps: First, we implement and verify the phase randomization experimentally; second, we prove the security of a QKD implementation without the coherent state assumption. In many security proofs of QKD, it is assumed that all the detectors on the receiver's side have identical detection efficiencies. We show experimentally that this assumption may be violated in a commercial QKD implementation due to an eavesdropper's malicious manipulation. Moreover, we show that the eavesdropper can learn part of the final key shared by the legitimate users as a consequence of this violation of the assumptions.

  14. Interface Input/Output Automata

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Nyman, Ulrik; Wasowski, Andrzej

    2006-01-01

    Building on the theory of interface automata by de Alfaro and Henzinger, we design an interface language for Lynch's I/O automata, a popular formalism used in the development of distributed asynchronous systems that has not been addressed by previous interface research. We introduce an explicit separation of assumptions from guarantees not yet seen in other behavioral interface theories. Moreover, we derive the composition operator systematically and formally, guaranteeing that the resulting compositions are always the weakest in the sense of assumptions and the strongest in the sense of guarantees. We also present a method for solving systems of relativized behavioral inequalities as used in our setup and draw a formal correspondence between our work and interface automata.

  15. Analysis of one assumption of the Navier-Stokes equations

    CERN Document Server

    Budarin, V A

    2013-01-01

    This article analyses the assumptions regarding the influence of pressure forces in calculating the motion of a Newtonian fluid. The purpose of the analysis is to determine how reasonable the assumptions are and how they affect the results of the analytical calculation. The connections between the equations, the causes of discrepancies in exact solutions of the Navier-Stokes equations at low Reynolds numbers, and the emergence of unstable solutions in computer programs are also addressed. The need to supplement the well-known equations of motion with additional equations for mechanical stresses is substantiated. It is shown that there are three methods for solving such a problem, and the requirements for the unknown equations are described. Keywords: Navier-Stokes, approximate equation, closing equations, holonomic system.

  16. Testing Modeling Assumptions in the West Africa Ebola Outbreak

    Science.gov (United States)

    Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre

    2016-01-01

    The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance. PMID:27721505
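
    A sketch of the growth-rate analysis, with synthetic district case counts: fit a log-linear (exponential) model to early counts in each division, then correlate the fitted rates with population density. The assumed density effect is illustrative only.

      import numpy as np

      rng = np.random.default_rng(5)
      days = np.arange(30)
      densities = np.array([50.0, 120.0, 300.0, 800.0])  # people/km^2 (assumed)

      rates = []
      for rho in densities:
          true_r = 0.12 - 0.00005 * rho                  # assumed density effect
          cases = np.maximum(1, rng.poisson(3.0 * np.exp(true_r * days)))
          rates.append(np.polyfit(days, np.log(cases), 1)[0])  # log-linear slope

      corr = np.corrcoef(densities, rates)[0, 1]
      print("fitted growth rates:", np.round(rates, 3), " corr:", round(corr, 2))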

  17. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG)... To our knowledge there has been no systematic study of the validity of the Markov assumption with respect to web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal measures of quality, based on the closeness of the mined patterns to the true traversal patterns, are defined and an extensive experimental evaluation is performed, based on two substantial real-world data sets. The results indicate that a large number of rules must be considered to achieve high quality...
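
    A minimal first-order Markov sketch of the assumption being evaluated: transition counts are aggregated from sessions, and the model then assigns probability to browsing patterns, including some no user ever followed. The sessions below are invented placeholders.

      from collections import defaultdict

      sessions = [["home", "search", "product", "cart"],
                  ["home", "product", "cart", "checkout"],
                  ["home", "search", "product", "checkout"]]

      counts = defaultdict(lambda: defaultdict(int))
      for s in sessions:
          for a, b in zip(s, s[1:]):
              counts[a][b] += 1

      def pattern_prob(path):
          """Probability the first-order model assigns to a click path."""
          p = 1.0
          for a, b in zip(path, path[1:]):
              total = sum(counts[a].values())
              p *= counts[a][b] / total if total else 0.0
          return p

      # Positive probability for a path absent from the log: exactly the
      # quality question the Markov assumption raises.
      print(pattern_prob(["home", "search", "product", "cart", "checkout"]))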

  18. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for), to see whether they are important, in general or with respect to the specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  19. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results... such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing... waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later benefit from past modelling assumptions, knowledge, and issues. Models developed in different countries furthermore rely...

  1. COGNITIVE INTERPRETATION OF INPUT HYPOTHESIS

    Institute of Scientific and Technical Information of China (English)

    Wang Hongyue; Ren Liankui

    2004-01-01

    Krashen's Input Hypothesis, together with its earlier version, the Monitor Model, is an influential theory in Second Language Acquisition research. In his studies, Krashen, on the one hand, emphasizes the part "comprehensible input" plays in learning a second language; on the other hand, he simply defines "comprehensible input" as "a little beyond the learner's current level". What input can be considered "a little beyond the learner's current level"? Krashen gives no further explanation. This paper tries to offer a more concrete and more detailed interpretation with Ausubel's Cognitive Assimilation theory.

  2. Input Hypothesis and its Controversy

    Institute of Scientific and Technical Information of China (English)

    金灵

    2016-01-01

    Since Krashen proposed the Input Hypothesis in the 1980s, many contributions and further studies have been made in second language acquisition and teaching. Because it is impossible to undertake exact empirical research to test its credibility, many criticisms have also arisen to disprove or adjust the hypothesis. However, given its significant influence on SLA, it is still valuable to explore the hypothesis and its implications for teaching language to non-native speakers. This paper first traces the development of the Input Hypothesis and then discusses some criticisms of it.

  3. Assumptions and realities of the NCLEX-RN.

    Science.gov (United States)

    Aucoin, Julia W; Treas, Leslie

    2005-01-01

    Every three years the National Council of State Boards of Nursing conducts a practice analysis to verify the activities that are tested on the licensure exam (NCLEX-RN). Faculty can benefit from information in the practice analysis to ensure that courses and experiences adequately prepare graduates for the NCLEX-RN. This summary of the practice analysis challenges common assumptions and provides recommendations for faculty.

  5. A computational model to investigate assumptions in the headturn preference procedure

    Directory of Open Access Journals (Sweden)

    Christina eBergmann

    2013-10-01

    Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positively weighted sums of simpler constituents. Model simulations show that the first assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.

  6. Some Considerations on the Basic Assumptions in Rotordynamics

    Science.gov (United States)

    GENTA, G.; DELPRETE, C.; BRUSA, E.

    1999-10-01

    The dynamic study of rotors is usually performed under a number of assumptions, namely small displacements and rotations, small unbalance and constant angular velocity. The latter assumption can be substituted by a known time history of the spin speed. The present paper develops a general non-linear model which can be used to study the rotordynamic behaviour of both fixed and free rotors without resorting to the mentioned assumptions and compares the results obtained from a number of non-linear numerical simulations with those computed through the usual linearized approach. It is so possible to verify that the validity of the rotordynamic models extends to situations in which fairly large unbalances and whirling motions are present and, above all, it is shown that the doubts forwarded about the application of a model which is based on constant spin speed to the case of free rotors in which the angular momentum is constant have no ground. Rotordynamic models can thus be used to study the stability in the small of spinning spacecrafts and the insight obtained from the study of rotors is useful to understand their attitude dynamics and its interactions with the vibration dynamics.

  7. Krashen's Input Hypothesis and Affective Filter Hypothesis ’Enlighten-ment to the Vocational English Teaching

    Institute of Scientific and Technical Information of China (English)

    LI Chun-xia

    2013-01-01

    Krashen's second language acquisition theory includes two important hypotheses: the input hypothesis and the affective filter hypothesis. They can guide and inspire vocational teaching because they start from the student's learning situation and guide teachers in adjusting their teaching methods, thereby serving both teaching and learning. This paper analyzes what Krashen's input hypothesis and affective filter hypothesis can offer vocational English teaching.

  8. Robust Output Feedback Control for a Class of Nonlinear Systems with Input Unmodeled Dynamics

    Institute of Scientific and Technical Information of China (English)

    Ming-Zhe Hou; Ai-Guo Wu; Guang-Ren Duan

    2008-01-01

    The robust global stabilization problem of a class of uncertain nonlinear systems with input unmodeled dynamics is considered using output feedback, where the uncertain nonlinear terms satisfy a far more relaxed condition than the existing triangular-type condition. Under the assumption that the input unmodeled dynamics is minimum-phase and of relative degree zero, a dynamic output compensator is explicitly constructed based on the nonseparation principle. An example illustrates the usefulness of the proposed method.

  9. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries.
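
    The network construction and the PageRank ranking mentioned above are straightforward to sketch. The three-industry flow matrix below is invented for illustration; this is not the WIOD pipeline, only a toy version of the idea.

        # Sketch: treat an input-output table as a weighted directed network
        # and rank industries with PageRank. Flow values are made up.
        import numpy as np
        import networkx as nx

        industries = ["agriculture", "manufacturing", "services"]
        # flows[i, j] = monetary flow from industry i to industry j
        flows = np.array([[ 0., 30.,  5.],
                          [10.,  0., 40.],
                          [ 8., 25.,  0.]])

        G = nx.DiGraph()
        for i, src in enumerate(industries):
            for j, dst in enumerate(industries):
                if flows[i, j] > 0:
                    G.add_edge(src, dst, weight=flows[i, j])

        # Key-industry ranking by weighted PageRank centrality
        pr = nx.pagerank(G, alpha=0.85, weight="weight")
        for name, score in sorted(pr.items(), key=lambda kv: -kv[1]):
            print(f"{name:14s} {score:.3f}")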

  10. On Adaptive Optimal Input Design

    NARCIS (Netherlands)

    Stigter, J.D.; Vries, D.; Keesman, K.J.

    2003-01-01

    The problem of optimal input design (OID) for a fed-batch bioreactor case study is solved recursively. Here an adaptive receding horizon optimal control problem, involving the so-called E-criterion, is solved on-line, using the current estimate of the parameter vector at each sample instant {t_k, k = …

  11. World Input-Output Network

    Science.gov (United States)

    Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo

    2015-01-01

    Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries. PMID:26222389

  12. Input in an Institutional Setting.

    Science.gov (United States)

    Bardovi-Harlig, Kathleen; Hartford, Beverly S.

    1996-01-01

    Investigates the nature of input available to learners in the institutional setting of the academic advising session. Results indicate that evidence for the realization of speech acts, positive evidence from peers and status unequals, the effect of stereotypes, and limitations of a learner's pragmatic and grammatical competence are influential…

  13. Optimal Inputs for System Identification.

    Science.gov (United States)

    1995-09-01

    The derivation of the power spectral density of the optimal input for system identification is addressed in this research. Optimality is defined in … identification potential of general System Identification algorithms, a new and efficient System Identification algorithm that employs Iterated Weighted Least …

  14. Analog Input Data Acquisition Software

    Science.gov (United States)

    Arens, Ellen

    2009-01-01

    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  15. Remote input/output station

    CERN Multimedia

    1972-01-01

    A general view of the remote input/output station installed in building 112 (ISR) and used for submitting jobs to the CDC 6500 and 6600. The card reader on the left and the line printer on the right are operated by programmers on a self-service basis.

  17. Input/output interface module

    Science.gov (United States)

    Ozyazici, E. M.

    1980-01-01

    Module detects level changes in any of its 16 inputs, transfers changes to its outputs, and generates interrupts when changes are detected. Up to four changes-in-state per line are stored for later retrieval by controlling computer. Using standard TTL logic, module fits 19-inch rack-mounted console.

  18. The advanced LIGO input optics

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Chris L., E-mail: cmueller@phys.ufl.edu; Arain, Muzammil A.; Ciani, Giacomo; Feldbaum, David; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Martin, Rodica M.; Reitze, David H.; Tanner, David B.; Williams, Luke F.; Mueller, Guido [University of Florida, Gainesville, Florida 32611 (United States); DeRosa, Ryan T.; Effler, Anamaria; Kokeyama, Keiko [Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Frolov, Valery V.; Mullavey, Adam [LIGO Livingston Observatory, Livingston, Louisiana 70754 (United States); Kawabe, Keita; Vorvick, Cheryl [LIGO Hanford Observatory, Richland, Washington 99352 (United States); King, Eleanor J. [University of Adelaide, Adelaide, SA 5005 (Australia); and others

    2016-01-15

    The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design.

  19. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the simultaneous input and parameter estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.
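
    The flavour of such a Lyapunov-based adaptive estimator can be sketched on a 1-D toy problem with a single unknown diffusion coefficient. The discretization, gains, and forcing below are all simplifying assumptions of ours; the paper's estimator additionally handles distributed source and reaction coefficients.

        # Toy sketch: estimate theta in u_t = theta*u_xx + f online from
        # distributed measurements, via an observer with output injection
        # and a gradient adaptation law. All parameters are invented.
        import numpy as np

        N, dt = 50, 2e-5
        dx = 1.0 / (N + 1)
        theta_true, theta_hat = 0.8, 0.2
        gamma, L_inj = 50.0, 50.0          # adaptation and injection gains
        f = 1.0                            # known constant source (excitation)

        u = np.zeros(N)                    # plant state, zero Dirichlet BCs
        u_hat = np.zeros(N)                # observer state

        def lap(v):                        # discrete Laplacian, zero BCs
            vp = np.pad(v, 1)
            return (vp[2:] - 2 * vp[1:-1] + vp[:-2]) / dx**2

        for _ in range(100_000):           # 2 s of simulated time
            uxx = lap(u)                   # computable: u is fully measured
            e = u - u_hat
            u = u + dt * (theta_true * uxx + f)
            u_hat = u_hat + dt * (theta_hat * lap(u_hat) + f + L_inj * e)
            theta_hat += dt * gamma * np.trapz(e * uxx, dx=dx)

        print("theta_hat =", round(theta_hat, 3))  # drifts toward 0.8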

  20. Evaluating risk factor assumptions: a simulation-based approach

    Directory of Open Access Journals (Sweden)

    Miglioretti Diana L

    2011-09-01

    Full Text Available Background: Microsimulation models are an important tool for estimating the comparative effectiveness of interventions through prediction of individual-level disease outcomes for a hypothetical population. To estimate the effectiveness of interventions targeted toward high-risk groups, the mechanism by which risk factors influence the natural history of disease must be specified. We propose a method for evaluating these risk factor assumptions as part of model-building. Methods: We used simulation studies to examine the impact of risk factor assumptions on the relative rate (RR) of colorectal cancer (CRC) incidence and mortality for a cohort with a risk factor compared to a cohort without the risk factor, using an extension of the CRC-SPIN model for colorectal cancer. We also compared the impact of changing the age at initiation of screening colonoscopy for different risk mechanisms. Results: Across CRC-specific risk factor mechanisms, the RR of CRC incidence and mortality decreased (towards one) with increasing age. The rate of change in RRs across age groups depended on both the risk factor mechanism and the strength of the risk factor effect. Increased non-CRC mortality attenuated the effect of CRC-specific risk factors on the RR of CRC when both were present. For each risk factor mechanism, earlier initiation of screening resulted in more life years gained, though the magnitude of life years gained varied across risk mechanisms. Conclusions: Simulation studies can provide insight into both the effect of risk factor assumptions on model predictions and the type of data needed to calibrate risk factor models.
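
    A stripped-down illustration of the approach: two simulated cohorts with and without a risk factor, unobserved frailty, competing non-disease mortality, and age-specific RRs. All hazards and the frailty distribution are invented, and this is far simpler than CRC-SPIN.

        # Toy microsimulation (not CRC-SPIN): age-specific RR of disease
        # incidence for a risk-factor cohort vs. baseline; gamma frailty
        # makes the RR drift toward 1 at older ages, as in the abstract.
        import numpy as np

        rng = np.random.default_rng(0)
        n, h0, hr, h_other = 200_000, 2e-3, 2.0, 1e-2   # per person-year

        def cohort(hazard):
            frailty = rng.gamma(0.5, 2.0, n)              # mean-1 heterogeneity
            t_dis = rng.exponential(1.0 / (hazard * frailty))  # age at disease
            t_oth = rng.exponential(1.0 / h_other, n)     # age at other death
            return t_dis, t_oth

        def incidence(t_dis, t_oth, a0, a1):
            at_risk = (t_dis > a0) & (t_oth > a0)         # alive, disease-free
            events = at_risk & (t_dis <= a1) & (t_dis < t_oth)
            return events.sum() / at_risk.sum()

        base, risk = cohort(h0), cohort(h0 * hr)
        for a0, a1 in [(40, 50), (60, 70), (80, 90)]:
            rr = incidence(*risk, a0, a1) / incidence(*base, a0, a1)
            print(f"ages {a0}-{a1}: RR = {rr:.2f}")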

  1. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    Full Text Available This article is devoted to an architectural and historical analysis of the constructions forming the complex of the Assumption Cathedral of the Astrakhan Kremlin, which has not previously been considered as a subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the article traces the creation history of this complete architectural complex, sustained in the single style of the Muscovite baroque and unique in its composite construction, and offers an interpretation of it in the all-Russian architectural context. Typological features of the individual constructions come to light. The Prechistinsky bell tower has an untypical architectural solution: a “hexagonal structure on octagonal and quadrangular structures”. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto (“the Place of Execution”) located on an axis from the West; it is connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article also considers the version that the Place of Execution emerged on the basis of an earlier construction, a tower called “the Peal”, which is repeatedly mentioned in written sources in connection with S. Razin’s revolt. The metropolitan Sampson, trying to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct appeal to a capital prototype, so as to emphasize continuity and close connection with Moscow.

  2. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG) model [borges99data]. These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our …

  3. AN EFFICIENT BIT COMMITMENT SCHEME BASED ON FACTORING ASSUMPTION

    Institute of Scientific and Technical Information of China (English)

    Zhong Ming; Yang Yixian

    2001-01-01

    Recently, many bit commitment schemes have been presented. This paper presents a new practical bit commitment scheme based on Schnorr's one-time knowledge proof scheme, where the cut-and-choose method and the many random exam candidates used in previous protocols are replaced by a single challenge number. The proposed bit commitment scheme is therefore more efficient and practical than previous schemes. In addition, the security of the proposed scheme under the factoring assumption is proved, thus clarifying its cryptographic basis.

  4. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  5. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    We find and classify a stock of 107 relevant articles across various research disciplines into four scientific discourses: the normative, the interpretive, the critical, and the dialogical, as formulated by Deetz (1996). We find that the normative discourse dominates the IT PPM literature, and few contributions represent the three remaining discourses, which unjustifiably leaves out issues that research could and most probably should investigate. In order to highlight research potentials, limitations, and underlying assumptions of each discourse, we develop four IT PPM metaphors …

  6. Robust predictive control of uncertain integrating linear systems with input constraints

    Institute of Scientific and Technical Information of China (English)

    张良军; 李江; 宋执环; 李平

    2002-01-01

    This paper presents a two-stage robust model predictive control (RMPC) algorithm, named IRMPC, for uncertain linear integrating plants described by a state-space model with input constraints. The global convergence of the resulting closed-loop system is guaranteed under a mild assumption. A simulation example shows its validity and better performance than conventional min-max RMPC strategies.

  7. Correcting partial volume artifacts of the arterial input function in quantitative cerebral perfusion MRI

    NARCIS (Netherlands)

    van Osch, MJP; Vonken, EJPA; Bakker, CJG; Viergever, MA

    2001-01-01

    To quantify cerebral perfusion with dynamic susceptibility contrast MRI (DSC-MRI), one needs to measure the arterial input function (AIF). Conventionally, one derives the contrast concentration from the DSC sequence by monitoring changes in either the amplitude or the phase signal on the assumption

  8. Dual-Modality Input in Repeated Reading for Foreign Language Learners with Different Learning Styles

    Science.gov (United States)

    Liu, Yeu-Ting; Todd, Andrew Graeme

    2014-01-01

    Research into dual-modality theory has long rested on the assumption that presenting input in two modalities leads to better learning outcomes. However, this may not always hold true. This study explored the possible advantages of using dual modality in repeated reading--a pedagogy often used to enhance reading development--for two literacy…

  10. Study of Chunks Input Approach

    Institute of Scientific and Technical Information of China (English)

    马静

    2003-01-01

    This paper describes and investigates the Chunks (Lexical Phrases) Input Approach to vocabulary learning strategies by means of achievement tests, questionnaire surveys and interviews. The study reveals how different learners combine different vocabulary learning strategies in their learning process. With the data collected, the author discusses and summarizes learners' individual differences in selecting vocabulary learning strategies, with the hope of giving new insights into English teaching and learning.

  11. Stability Analysis of the Dynamic Input-Output System

    Institute of Scientific and Technical Information of China (English)

    郭崇慧; 唐焕文

    2002-01-01

    The dynamic input-output model is well known in economic theory and practice. In this paper, the asymptotic stability and balanced growth solutions of the dynamic input-output system are considered. Under some natural assumptions, which do not require the technical coefficient matrix to be indecomposable, it is proved that the dynamic input-output system is not asymptotically stable and that the closed dynamic input-output model has a balanced growth solution.
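
    The balanced growth solution referred to above can be computed as a generalized eigenproblem: on a balanced path x_{t+1} = (1+g) x_t, the closed model x_t = A x_t + B (x_{t+1} - x_t) reduces to (I - A) x = g B x. A sketch with invented coefficient matrices (the paper's contribution is the existence proof, not this computation):

        # Sketch: balanced-growth rate of a closed dynamic Leontief model.
        # A (technical coefficients) and B (capital coefficients) are toy values.
        import numpy as np
        from scipy.linalg import eig

        A = np.array([[0.3, 0.2],
                      [0.1, 0.4]])          # technical coefficient matrix
        B = np.array([[0.8, 0.1],
                      [0.2, 0.9]])          # capital coefficient matrix

        vals, vecs = eig(np.eye(2) - A, B)  # solves (I - A) v = g B v
        for g, v in zip(vals.real, vecs.T.real):
            v = v / v.sum()                 # normalize; flips all-negative vectors
            if np.all(v > 0):               # economically meaningful solution
                print(f"balanced growth rate g = {g:.3f}, proportions x = {v.round(3)}")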

  12. Does the Distribution of Efficiency Scores Depend on the Input Mix?

    DEFF Research Database (Denmark)

    Asmild, Mette; Leth Hougaard, Jens; Kronborg, Dorte

    A data generating process (DGP) suiting the DEA methodology has been formulated and some asymptotic properties of the DEA estimators have been established. In line with this generally accepted DGP, we formulate a conditional test for the assumption of mix independence. Since the DEA frontier is estimated, many standard assumptions for evaluating the test statistic are violated. Therefore, we propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, as shown here, is applicable for models with multiple inputs and one output with constant returns to scale when comparing distributions of efficiency scores in two or more groups. The approach is illustrated in an empirical case of demolition projects, where we reject the assumption of mix independence. This means that it is not meaningful to perform a complete ranking of the projects …

  13. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J; Kahn, Yonatan; McCullough, Matthew

    2015-01-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the $m_\\chi-\\sigma_n$ plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the $v_{min}-\\tilde{g}$ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from $v_{min}$ to nuclear recoil momentum ($p_R$), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call $\\tilde{h}(p_R)$. The entire family of conventional halo-independent $\\tilde{g}(v_{min})$ plots for all DM masses are directly found from the single $\\tilde{h}(p_R)$ plot through a simple re...
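
    For reference, the change of variables mentioned above rests on standard elastic-scattering kinematics (our transcription; consult the paper for the exact definition of the halo integral variable):

        % v_min <-> recoil momentum, for elastic DM-nucleus scattering:
        \begin{align*}
          v_{\min} \;=\; \frac{p_R}{2\,\mu_{\chi N}},
          \qquad
          p_R \;=\; \sqrt{2\, m_N E_R},
          \qquad
          \mu_{\chi N} \;=\; \frac{m_\chi\, m_N}{m_\chi + m_N},
        \end{align*}
        % so a single curve in p_R encodes the conventional halo-independent
        % curve in v_min for every assumed dark matter mass m_chi.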

  14. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.

  15. On the role of assumptions in cladistic biogeographical analyses

    Directory of Open Access Journals (Sweden)

    Charles Morphy Dias dos Santos

    2011-01-01

    Full Text Available The biogeographical Assumptions 0, 1, and 2 (respectively A0, A1 and A2) are theoretical terms used to interpret and resolve incongruence in order to find general areagrams. The aim of this paper is to suggest the use of A2 instead of A0 and A1 in solving uncertainties during cladistic biogeographical analyses. In a theoretical example, using Component Analysis and Primary Brooks Parsimony Analysis (primary BPA), A2 allows for the reconstruction of the true sequence of disjunction events within a hypothetical scenario, while A0 adds spurious area relationships. A0, A1 and A2 are interpretations of the relationships between areas, not between taxa. Since area relationships are not equivalent to cladistic relationships, it is inappropriate to use the distributional information of taxa to resolve ambiguous patterns in areagrams, as A0 does. Although ambiguity in areagrams is virtually impossible to explain, A2 is better and more neutral than any other biogeographical assumption.

  16. Economic Growth Assumptions in Climate and Energy Policy

    Directory of Open Access Journals (Sweden)

    Nir Y. Krakauer

    2014-03-01

    Full Text Available The assumption that the economic growth seen in recent decades will continue has dominated the discussion of future greenhouse gas emissions and the mitigation of and adaptation to climate change. Given that long-term economic growth is uncertain, the impacts of a wide range of growth trajectories should be considered. In particular, slower economic growth would imply that future generations will be relatively less able to invest in emissions controls or adapt to the detrimental impacts of climate change. Taking into consideration the possibility of economic slowdown therefore heightens the urgency of reducing greenhouse gas emissions now by moving to renewable energy sources, even if this incurs short-term economic cost. I quantify this counterintuitive impact of economic growth assumptions on present-day policy decisions in a simple global economy-climate model (the Dynamic Integrated model of Climate and the Economy, DICE). In DICE, slow future growth increases the economically optimal present-day carbon tax rate and the utility of taxing carbon emissions, although the magnitude of the increase is sensitive to model parameters, including the rate of social time preference and the elasticity of the marginal utility of consumption. Future scenario development should specifically include low-growth scenarios, and the possibility of low-growth economic trajectories should be taken into account in climate policy analyses.
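
    The two sensitive parameters named above enter through the standard Ramsey discounting rule, r = rho + eta * g. A minimal sketch of that mechanism, under illustrative parameter values rather than DICE calibrations (DICE itself optimizes a full economy-climate model):

        # Sketch: slower assumed growth g lowers the Ramsey discount rate
        # r = rho + eta*g, so future climate damages weigh more today.
        rho, eta = 0.015, 1.45          # time preference, utility elasticity
        horizon = 100                   # years

        for g in (0.025, 0.010, 0.000): # fast, slow, zero growth
            r = rho + eta * g
            pv = 1.0 / (1.0 + r) ** horizon   # value today of $1 of damage
            print(f"g = {g:.1%} -> r = {r:.2%}, PV of $1 in {horizon} yr = ${pv:.2f}")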

  17. The extended evolutionary synthesis: its structure, assumptions and predictions

    Science.gov (United States)

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the ‘extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  18. Time derivatives of the spectrum: Relaxing the stationarity assumption

    Science.gov (United States)

    Prieto, G. A.; Thomson, D. J.; Vernon, F. L.

    2005-12-01

    Spectrum analysis of seismic waveforms has played a significant role in the understanding of multiple aspects of Earth structure and earthquake source physics. In recent years the multitaper spectrum estimation approach (Thomson, 1982) has been applied to geophysical problems, providing not only reliable estimates of the spectrum but also estimates of spectral uncertainties (Thomson and Chave, 1991). However, these improved spectral estimates were developed under the assumption of local stationarity and provide an incomplete description of the observed process. It is obvious that, due to the intrinsic attenuation of the Earth, the amplitudes, and thus the frequency content, change with time as waves pass through a seismic station. There have been substantial improvements in techniques for analyzing non-stationary signals, including wavelet decomposition, the Wigner-Ville spectrum and the dual-frequency spectrum. We apply one of the recently developed techniques, Quadratic Inverse Theory (Thomson, 1990, 1994), combined with the multitaper technique, to look at the time derivatives of the spectrum. If the spectrum is reasonably white in a certain bandwidth, using QI theory we can estimate the derivatives of the spectrum at each frequency. We test synthetic signals to corroborate the approach and apply it to records of small earthquakes at local distances. This is a first attempt to combine classical spectrum analysis with a relaxation of the stationarity assumption that is generally made.
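
    The underlying multitaper estimate is easy to sketch: average the periodograms obtained with K orthogonal Slepian (DPSS) tapers. The signal and parameters below are invented, and the QI-based spectral derivatives go beyond this sketch.

        # Minimal Thomson multitaper PSD estimate on a synthetic signal.
        import numpy as np
        from scipy.signal.windows import dpss

        fs = 100.0                                  # sampling rate, Hz
        t = np.arange(0, 30, 1 / fs)
        x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * np.random.randn(t.size)

        NW, K = 4, 7                                # time-bandwidth product, tapers
        tapers = dpss(x.size, NW, K)                # shape (K, N) Slepian tapers
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        S = spectra.mean(axis=0) / fs               # average over tapers
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        print("spectral peak near", freqs[S.argmax()], "Hz")  # ~12.5 Hz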

  19. Relaxing the zero-sum assumption in neutral biodiversity theory.

    Science.gov (United States)

    Haegeman, Bart; Etienne, Rampal S

    2008-05-21

    The zero-sum assumption is one of the ingredients of the standard neutral model of biodiversity by Hubbell. It states that the community is saturated all the time, which in this model means that the total number of individuals in the community is constant over time, and therefore introduces a coupling between species abundances. It was shown recently that a neutral model with independent species, and thus without any coupling between species abundances, has the same sampling formula (given a fixed number of individuals in the sample) as the standard model [Etienne, R.S., Alonso, D., McKane, A.J., 2007. The zero-sum assumption in neutral biodiversity theory. J. Theor. Biol. 248, 522-536]. The equilibria of both models are therefore equivalent from a practical point of view. Here we show that this equivalence can be extended to a class of neutral models with density-dependence on the community-level. This result can be interpreted as robustness of the model, i.e. insensitivity of the model to the precise interaction of the species in a neutral community. It can also be interpreted as a lack of resolution, as different mechanisms of interactions between neutral species cannot be distinguished using only a single snapshot of species abundance data.

  20. What lies beneath: underlying assumptions in bioimage analysis.

    Science.gov (United States)

    Pridmore, Tony P; French, Andrew P; Pound, Michael P

    2012-12-01

    The need for plant image analysis tools is established and has led to a steadily expanding literature and set of software tools. This is encouraging, but raises a question: how does a plant scientist with no detailed knowledge or experience of image analysis methods choose the right tool(s) for the task at hand, or satisfy themselves that a suggested approach is appropriate? We believe that too great an emphasis is currently being placed on low-level mechanisms and software environments. In this opinion article we propose that a renewed focus on the core theories and algorithms used, and in particular the assumptions upon which they rely, will better equip plant scientists to evaluate the available resources. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Assumptions of Customer Knowledge Enablement in the Open Innovation Process

    Directory of Open Access Journals (Sweden)

    Jokubauskienė Raminta

    2017-08-01

    Full Text Available In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but customers nevertheless stand as the cornerstone in this area, since customers' knowledge is one of the most important sources of new knowledge and ideas. When evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customer knowledge management facilitates the creation of innovations. However, other factors that influence open innovation, and at the same time customer knowledge management, should also be examined. This article presents a theoretical model which reveals the assumptions of the open innovation process and their impact on the firm's performance.

  2. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-02-14

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model.
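
    The two-party primitive that the group protocol extends is easy to sketch: each party publishes g^x mod p, and both derive g^(ab). The parameters below are toy values; real use needs a vetted large group plus authentication, which is exactly what the paper's model adds on top.

        # Two-party Diffie-Hellman sketch (toy parameters, NOT secure).
        import secrets

        p = 2**127 - 1      # a Mersenne prime; far too small for real security
        g = 3

        a = secrets.randbelow(p - 2) + 1        # Alice's secret exponent
        b = secrets.randbelow(p - 2) + 1        # Bob's secret exponent
        A, B = pow(g, a, p), pow(g, b, p)       # public values, sent in the clear
        assert pow(B, a, p) == pow(A, b, p)     # both hold g^(ab) mod p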

  3. Decision-Theoretic Planning: Structural Assumptions and Computational Leverage

    CERN Document Server

    Boutilier, C; Hanks, S; 10.1613/jair.575

    2011-01-01

    Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the...
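
    The basic computation behind these MDP-based planning methods is value iteration; a minimal sketch on an invented two-state, two-action MDP:

        # Value iteration for a toy MDP (made-up transitions and rewards).
        import numpy as np

        # P[a][s, s'] = transition probabilities; R[s, a] = rewards
        P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                      [[0.5, 0.5], [0.1, 0.9]]])
        R = np.array([[1.0, 0.0],
                      [0.0, 2.0]])
        gamma = 0.95

        V = np.zeros(2)
        for _ in range(500):
            Q = R + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
            V = Q.max(axis=1)                             # Bellman backup
        policy = Q.argmax(axis=1)
        print("V* =", V.round(2), "policy =", policy)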

  4. Validating modelling assumptions of alpha particles in electrostatic turbulence

    CERN Document Server

    Wilkie, George; Highcock, Edmund; Dorland, William

    2014-01-01

    To rigorously model fast ions in fusion plasmas, a non-Maxwellian equilibrium distribution must be used. In this work, the response of high-energy alpha particles to electrostatic turbulence has been analyzed for several different tokamak parameters. Our results are consistent with known scalings and experimental evidence that alpha particles are generally well confined: on the order of several seconds. It is also confirmed that the effect of alphas on the turbulence is negligible at realistically low concentrations, consistent with linear theory. It is demonstrated that the usual practice of using a high-temperature Maxwellian gives incorrect estimates for the radial alpha particle flux, and a method of correcting it is provided. Furthermore, we see that the timescales associated with collisions and transport compete at moderate energies, calling into question the assumption that alpha particles remain confined to a flux surface, which is used in the derivation of the slowing-down distribution.

  5. Exploring gravitational statistics not based on quantum dynamical assumptions

    CERN Document Server

    Mandrin, P A

    2016-01-01

    Despite considerable progress in several approaches to quantum gravity, there remain uncertainties on the conceptual level. One issue concerns the different roles played by space and time in the canonical quantum formalism. This issue occurs because the Hamilton-Jacobi dynamics is being quantised. The question then arises whether additional physically relevant states could exist which cannot be represented in the canonical form or as a partition function. For this reason, the author has explored a statistical approach (NDA) which is not based on quantum dynamical assumptions and does not require space-time splitting boundary conditions either. For dimension 3+1 and under thermal equilibrium, NDA simplifies to a path integral model. However, the general case of NDA cannot be written as a partition function. As a test of NDA, one recovers general relativity at low curvature and quantum field theory in the flat space-time approximation. Related paper: arxiv:1505.03719.

  6. Uncovering Metaethical Assumptions in Bioethical Discourse across Cultures.

    Science.gov (United States)

    Sullivan, Laura Specker

    2016-03-01

    Much of bioethical discourse now takes place across cultures. This does not mean that cross-cultural understanding has increased. Many cross-cultural bioethical discussions are marked by entrenched disagreement about whether and why local practices are justified. In this paper, I argue that a major reason for these entrenched disagreements is that problematic metaethical commitments are hidden in these cross-cultural discourses. Using the issue of informed consent in East Asia as an example of one such discourse, I analyze two representative positions in the discussion and identify their metaethical commitments. I suggest that the metaethical assumptions of these positions result from their shared method of ethical justification: moral principlism. I then show why moral principlism is problematic in cross-cultural analyses and propose a more useful method for pursuing ethical justification across cultures.

  7. Posttraumatic Growth and Shattered World Assumptions Among Ex-POWs

    DEFF Research Database (Denmark)

    Lahav, Y.; Bellin, Elisheva S.; Solomon, Z.

    2016-01-01

    … world assumptions (WAs), and that the co-occurrence of high PTG and negative WAs among trauma survivors reflects reconstruction of an integrative belief system. The present study aimed to test these claims by investigating, for the first time, the mediating role of dissociation in the relation between PTG and WAs. Method: Former prisoners of war (ex-POWs; n = 158) and comparable controls (n = 106) were assessed 38 years after the Yom Kippur War. Results: Ex-POWs endorsed more negative WAs and higher PTG and dissociation compared to controls. Ex-POWs with posttraumatic stress disorder (PTSD) endorsed negative WAs and a higher magnitude of PTG and dissociation, compared to both ex-POWs without PTSD and controls. WAs were negatively correlated with dissociation and positively correlated with PTG. PTG was positively correlated with dissociation. Moreover, dissociation fully mediated …

  8. Linear irreversible heat engines based on local equilibrium assumptions

    Science.gov (United States)

    Izumida, Yuki; Okuda, Koji

    2015-08-01

    We formulate an endoreversible finite-time Carnot cycle model based on the assumptions of local equilibrium and constant energy flux, where the efficiency and the power are expressed in terms of the thermodynamic variables of the working substance. By analyzing the entropy production rate caused by the heat transfer in each isothermal process during the cycle, and using the endoreversible condition applied to the linear response regime, we identify the thermodynamic flux and force of the present system and obtain a linear relation that connects them. We calculate the efficiency at maximum power in the linear response regime by using the linear relation, which agrees with the Curzon-Ahlborn (CA) efficiency known as the upper bound in this regime. This reason is also elucidated by rewriting our model into the form of the Onsager relations, where our model turns out to satisfy the tight-coupling condition leading to the CA efficiency.
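
    For reference, the efficiency at maximum power recovered by this analysis is the Curzon-Ahlborn value, whose linear-response expansion makes the connection to the Onsager framework explicit (standard result, our transcription):

        % Curzon-Ahlborn efficiency vs. the Carnot bound:
        \begin{align*}
          \eta_{\mathrm{CA}} \;=\; 1 - \sqrt{\frac{T_c}{T_h}}
            \;=\; \frac{\eta_C}{2} + \frac{\eta_C^2}{8} + \cdots,
          \qquad
          \eta_C \;=\; 1 - \frac{T_c}{T_h},
        \end{align*}
        % whose leading term, eta_C/2, is the universal upper bound on
        % efficiency at maximum power in the linear response regime.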

  9. New media in strategy – mapping assumptions in the field

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Plesner, Ursula; Raviola, Elena

    2017-01-01

    in relation to the outside or the inside of the organization. After discussing the literature according to these dimensions (deterministic/volontaristic) and (internal/external), the article argues for a sociomaterial approach to strategy and strategy making and for using the concept of affordances......There is plenty of empirical evidence for claiming that new media make a difference for how strategy is conceived and executed. Furthermore, there is a rapidly growing body of literature that engages with this theme, and offers recommendations regarding the appropriate strategic actions in relation...... to new media. By contrast, there is relatively little attention to the assumptions behind strategic thinking in relation to new media. This article reviews the most influential strategy journals, asking how new media are conceptualized. It is shown that strategy scholars have a tendency to place...

  10. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  11. Statistical Tests of the PTHA Poisson Assumption for Submarine Landslides

    Science.gov (United States)

    Geist, E. L.; Chaytor, J. D.; Parsons, T.; Ten Brink, U. S.

    2012-12-01

    We demonstrate that a sequence of dated mass transport deposits (MTDs) can provide information to statistically test whether or not submarine landslides associated with these deposits conform to a Poisson model of occurrence. Probabilistic tsunami hazard analysis (PTHA) most often assumes Poissonian occurrence for all sources, with an exponential distribution of return times. Using dates that define the bounds of individual MTDs, we first describe likelihood and Monte Carlo methods of parameter estimation for a suite of candidate occurrence models (Poisson, lognormal, gamma, Brownian Passage Time). In addition to age-dating uncertainty, both methods incorporate uncertainty caused by the open time intervals: i.e., before the first and after the last event to the present. Accounting for these open intervals is critical when there are a small number of observed events. The optimal occurrence model is selected according to both the Akaike Information Criteria (AIC) and Akaike's Bayesian Information Criterion (ABIC). In addition, the likelihood ratio test can be performed on occurrence models from the same family: e.g., the gamma model relative to the exponential model of return time distribution. Parameter estimation, model selection, and hypothesis testing are performed on data from two IODP holes in the northern Gulf of Mexico that penetrated a total of 14 MTDs, some of which are correlated between the two holes. Each of these events has been assigned an age based on microfossil zonations and magnetostratigraphic datums. Results from these sites indicate that the Poisson assumption is likely valid. However, parameter estimation results using the likelihood method for one of the sites suggest that the events may have occurred quasi-periodically. Methods developed in this study provide tools with which one can determine both the rate of occurrence and the statistical validity of the Poisson assumption when submarine landslides are included in PTHA.
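
    A stripped-down version of the model-selection step: fit candidate return-time distributions to inter-event times and compare AIC values. The synthetic times below ignore the open intervals and dating uncertainty that the study handles explicitly.

        # Sketch: exponential (Poisson) vs. gamma return-time models by AIC.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        waits = rng.gamma(shape=2.5, scale=4e3, size=13)   # fake waits, years

        def aic(loglik, k):
            return 2 * k - 2 * loglik

        lam = 1.0 / waits.mean()                           # exponential MLE
        ll_exp = stats.expon.logpdf(waits, scale=1.0 / lam).sum()
        a, loc, b = stats.gamma.fit(waits, floc=0)         # gamma MLE (loc = 0)
        ll_gam = stats.gamma.logpdf(waits, a, loc=0, scale=b).sum()

        print("AIC exponential (Poisson) :", round(aic(ll_exp, 1), 1))
        print("AIC gamma (quasi-periodic):", round(aic(ll_gam, 2), 1))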

  12. Power spectra of the natural input to the visual system.

    Science.gov (United States)

    Pamplona, D; Triesch, J; Rothkopf, C A

    2013-05-03

    The efficient coding hypothesis posits that sensory systems are adapted to the regularities of their signal input so as to reduce redundancy in the resulting representations. It is therefore important to characterize the regularities of natural signals to gain insight into the processing of natural stimuli. While measurements of statistical regularity in vision have focused on photographic images of natural environments, it has been much less investigated how the specific imaging process embodied by the organism's eye induces statistical dependencies on the natural input to the visual system. This has allowed researchers to rely on the convenient assumption that natural image data are homogeneous across the visual field. Here we give up this assumption and show how the imaging process in a human model eye influences the local statistics of the natural input to the visual system across the entire visual field. Artificial scenes with three-dimensional edge elements were generated, and the influence of the imaging projection onto the back of a spherical model eye was quantified. The resulting distributions show a strong radial influence of the imaging process on edge statistics with increasing eccentricity from the model fovea. This influence is further quantified through computation of the second-order intensity statistics as a function of eccentricity from the center of projection, using samples from the dead leaves image model. Using data from a naturalistic virtual environment, which allows generation of correctly projected images onto the model eye across the entire field of view, we quantified the second-order dependencies as a function of position in the visual field using a new generalized parameterization of the power spectra. Finally, we compared this analysis with a commonly used natural image database, the van Hateren database, and show good agreement within the small field of view available in these photographic images. We conclude by providing a detailed …
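
    In the simplest homogeneous case, the second-order statistics above reduce to a radially averaged 2-D power spectrum. A minimal sketch on synthetic data (white noise stands in for an image patch; the paper's generalized, position-dependent parameterization is not reproduced here):

        # Radially averaged 2-D power spectrum of an image patch.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 256
        img = rng.standard_normal((n, n))           # stand-in "image"

        F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        power = np.abs(F) ** 2

        ky, kx = np.indices((n, n)) - n // 2        # centered frequency grid
        k = np.hypot(kx, ky).astype(int)            # radial frequency bin
        counts = np.bincount(k.ravel())
        radial = np.bincount(k.ravel(), weights=power.ravel()) / counts
        for kk in (2, 8, 32, 100):                  # flat for white noise;
            print(f"k={kk:3d}  mean power = {radial[kk]:.1f}")  # ~1/f^2 for natural images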

  13. Weak convergence of Jacobian determinants under asymmetric assumptions

    Directory of Open Access Journals (Sweden)

    Teresa Alberico

    2012-05-01

    Full Text Available Let $\\Om$ be a bounded open set in $\\R^2$ sufficiently smooth and $f_k=(u_k,v_k$ and $f=(u,v$ mappings belong to the Sobolev space $W^{1,2}(\\Om,\\R^2$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\\mu$ in sense of measures andif one allows different assumptions on the two components of $f_k$ and $f$, e.g.$$u_k \\rightharpoonup u \\;\\;\\mbox{weakly in} \\;\\; W^{1,2}(\\Om \\qquad \\, v_k \\rightharpoonup v \\;\\;\\mbox{weakly in} \\;\\; W^{1,q}(\\Om$$for some $q\\in(1,2$, then\\begin{equation}\\label{0}d\\mu=J_f\\,dz.\\end{equation}Moreover, we show that this result is optimal in the sense that conclusion fails for $q=1$.On the other hand, we prove that \\eqref{0} remains valid also if one considers the case $q=1$, but it is necessary to require that $u_k$ weakly converges to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\\Om$ and precisely$$ u_k \\rightharpoonup u \\;\\;\\mbox{weakly in} \\;\\; W^{1,L^2 \\log^\\alpha L}(\\Om$$for some $\\alpha >1$.    

  14. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    Science.gov (United States)

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  15. In vitro versus in vivo culture sensitivities: an unchecked assumption?

    Directory of Open Access Journals (Sweden)

    Prasad V

    2013-03-01

    Full Text Available No abstract available. Article truncated at 150 words. Case Presentation: A patient presents to urgent care with the symptoms of a urinary tract infection (UTI). The urinalysis is consistent with infection, and the urine culture is sent to the lab. In the interim, a physician prescribes empiric treatment and sends the patient home. Two days later, the culture is positive for E. coli, resistant to the drug prescribed (ciprofloxacin, minimum inhibitory concentration (MIC) 64 μg/ml), but attempts to contact the patient (by telephone) are not successful. The patient returns the call two weeks later to say that the infection resolved without sequelae. Discussion: Many clinicians have the experience of treatment success in the setting of known antibiotic resistance, and, conversely, treatment failure in the setting of known sensitivity. Such anomalies and the empiric research described here force us to revisit assumptions about the relationship between in vivo and in vitro drug responses. When it comes to the utility of microbiology…

  16. Finite Element Simulations to Explore Assumptions in Kolsky Bar Experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Crum, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-05

    The chief purpose of this project has been to develop a set of finite element models that attempt to explore some of the assumptions in the experimental set-up and data reduction of the Kolsky bar experiment. In brief, the Kolsky bar, sometimes referred to as the split Hopkinson pressure bar, is an experimental apparatus used to study the mechanical properties of materials at high strain rates. Kolsky bars can be constructed to conduct experiments in tension or compression, both of which are studied in this paper. The basic operation of the tension Kolsky bar is as follows: compressed air is inserted into the barrel that contains the striker; the striker accelerates towards the left and strikes the left end of the barrel, producing a tensile stress wave that propagates first through the barrel and then down the incident bar, into the specimen, and finally the transmission bar. In the compression case, the striker instead travels to the right and impacts the incident bar directly. As the stress wave travels through an interface (e.g., the incident bar to specimen connection), a portion of the pulse is transmitted and the rest reflected. The incident pulse, as well as the transmitted and reflected pulses, are picked up by two strain gauges installed on the incident and transmission bars. By interpreting the data acquired by these strain gauges, the stress/strain behavior of the specimen can be determined.
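
    The data-reduction step referred to above can be made concrete. The following is a minimal Python sketch of the classic one-wave Kolsky-bar reduction, in which the specimen strain rate follows from the reflected pulse and the specimen stress from the transmitted pulse; the variable names and the assumptions of elastic bars and uniform specimen stress are ours, not the report's:

        import numpy as np

        def kolsky_one_wave(eps_r, eps_t, dt, E_bar, rho_bar, A_bar, A_spec, L_spec):
            """One-wave Kolsky-bar data reduction (sketch).

            eps_r, eps_t -- reflected and transmitted strain-gauge signals (arrays)
            dt           -- sampling interval of the gauges
            """
            c0 = np.sqrt(E_bar / rho_bar)               # elastic wave speed in the bars
            strain_rate = -2.0 * c0 * eps_r / L_spec    # specimen strain rate from reflected pulse
            strain = np.cumsum(strain_rate) * dt        # integrate to specimen strain
            stress = E_bar * (A_bar / A_spec) * eps_t   # specimen stress from transmitted pulse
            return strain, stress, strain_rate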

  17. Cleanup of contaminated soil -- Unreal risk assumptions: Contaminant degradation

    Energy Technology Data Exchange (ETDEWEB)

    Schiffman, A. [New Jersey Department of Environmental Protection, Ewing, NJ (United States)

    1995-12-31

    Exposure assessments for development of risk-based soil cleanup standards or criteria assume that contaminant mass in soil is infinite and conservative (constant concentration). This assumption is not realistic for most organic chemicals. Contaminant mass is lost from soil and ground water when organic chemicals degrade. Factors to correct for chemical mass lost by degradation are derived from first-order kinetics for 85 organic chemicals commonly listed by USEPA and state agencies. Soil cleanup criteria, based on constant concentration, are then corrected for contaminant mass lost. For many chemicals, accounting for mass lost yields large correction factors to risk-based soil concentrations. For degradation in ground water and soil, correction factors range from greater than one to several orders of magnitude. The long exposure durations normally used in exposure assessments (25 to 70 years) result in large correction factors to standards, even for carcinogenic chemicals with long half-lives. For the ground water pathway, a typical soil criterion for TCE of 1 mg/kg would be corrected to 11 mg/kg. For noncarcinogens, correcting for mass lost means that risk algorithms used to set soil cleanup requirements are inapplicable for many chemicals, especially for long periods of exposure.
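
    To illustrate the arithmetic behind such correction factors, here is a short Python sketch under the stated first-order-kinetics assumption; the half-life and exposure duration below are illustrative values of ours, not the paper's:

        import math

        def degradation_correction_factor(half_life_years, exposure_years):
            """Factor by which a constant-concentration criterion may be relaxed
            when the contaminant actually decays by first-order kinetics: the
            time-averaged concentration over the exposure duration T is
            C0 * (1 - exp(-k*T)) / (k*T), so the factor is its reciprocal."""
            k = math.log(2.0) / half_life_years      # first-order rate constant
            kt = k * exposure_years
            return kt / (1.0 - math.exp(-kt))

        # An assumed ~2-year half-life over a 30-year exposure gives a factor
        # of ~10, the order of magnitude of the TCE correction quoted above.
        print(degradation_correction_factor(half_life_years=2.0, exposure_years=30.0))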

  18. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. PKreport: report generation for checking population pharmacokinetic model assumptions

    Directory of Open Access Journals (Sweden)

    Li Jun

    2011-05-01

    Full Text Available Abstract Background Graphics play an important and unique role in population pharmacokinetic (PopPK) model building by exploring hidden structure among data before modeling, evaluating model fit, and validating results after modeling. Results The work described in this paper is about a new R package called PKreport, which is able to generate a collection of plots and statistics for testing model assumptions, visualizing data and diagnosing models. The metric system is utilized as the currency for communicating between data sets and the package to generate special-purpose plots. It provides ways to match output from diverse software such as NONMEM, Monolix, the R nlme package, etc. The package is implemented with an S4 class hierarchy, and offers an efficient way to access the output from NONMEM 7. The final reports take advantage of the web browser as a user interface to manage and visualize plots. Conclusions PKreport provides (1) a flexible and efficient R class to store and retrieve NONMEM 7 output; (2) automated plots for users to visualize data and models; (3) automatically generated R scripts that are used to create the plots; (4) an archive-oriented management tool for users to store, retrieve and modify figures; (5) high-quality graphs based on the R packages lattice and ggplot2. The general architecture, running environment and statistical methods can be readily extended with the R class hierarchy. PKreport is free to download at http://cran.r-project.org/web/packages/PKreport/index.html.

  20. Testing the habituation assumption underlying models of parasitoid foraging behavior

    Science.gov (United States)

    Abram, Katrina; Colazza, Stefano; Peri, Ezio

    2017-01-01

    Background Habituation, a form of non-associative learning, has several well-defined characteristics that apply to a wide range of physiological and behavioral responses in many organisms. In classic patch time allocation models, habituation is considered to be a major mechanistic component of parasitoid behavioral strategies. However, parasitoid behavioral responses to host cues have not previously been tested for the known, specific characteristics of habituation. Methods In the laboratory, we tested whether the foraging behavior of the egg parasitoid Trissolcus basalis shows specific characteristics of habituation in response to consecutive encounters with patches of host (Nezara viridula) chemical contact cues (footprints), in particular: (i) a training interval-dependent decline in response intensity, and (ii) a training interval-dependent recovery of the response. Results As would be expected of a habituated response, wasps trained at higher frequencies decreased their behavioral response to host footprints more quickly and to a greater degree than those trained at low frequencies, and subsequently showed a more rapid, although partial, recovery of their behavioral response to host footprints. This putative habituation learning could not be blocked by cold anesthesia, ingestion of an ATPase inhibitor, or ingestion of a protein synthesis inhibitor. Discussion Our study provides support for the assumption that diminishing responses of parasitoids to chemical indicators of host presence constitutes habituation as opposed to sensory fatigue, and provides a preliminary basis for exploring the underlying mechanisms. PMID:28321365

  1. Observing gravitational-wave transient GW150914 with minimal assumptions

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. 
C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Haas, R.; Hacker, J. J.

    2016-06-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be sensitive to gravitational waves emitted by a wide range of sources including binary black hole mergers. Over the observational period from September 12 to October 20, 2015, these transient searches were sensitive to binary black hole mergers similar to GW150914 to an average distance of ˜600 Mpc . In this paper, we describe the analyses that first detected GW150914 as well as the parameter estimation and waveform reconstruction techniques that initially identified GW150914 as the merger of two black holes. We find that the reconstructed waveform is consistent with the signal from a binary black hole merger with a chirp mass of ˜30 M⊙ and a total mass before merger of ˜70 M⊙ in the detector frame.

  2. Randomness Amplification under Minimal Fundamental Assumptions on the Devices

    Science.gov (United States)

    Ramanathan, Ravishankar; Brandão, Fernando G. S. L.; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Wojewódka, Hanna

    2016-12-01

    Recently, the physically realistic protocol amplifying the randomness of Santha-Vazirani sources producing cryptographically secure random bits was proposed; however, for reasons of practical relevance, the crucial question remained open regarding whether this can be accomplished under the minimal conditions necessary for the task. Namely, is it possible to achieve randomness amplification using only two no-signaling components and in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we solve this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two nonsignaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate and prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test [a single input-output pair (u* , x* ) for which the conditional probability P (x*|u*) is bounded away from 1 for all no-signaling strategies that optimally violate the Bell inequality] can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test that ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound.

  3. Repositioning Recitation Input in College English Teaching

    Science.gov (United States)

    Xu, Qing

    2009-01-01

    Drawing on second language acquisition theory, this paper discusses how recitation input helps overcome negative influences in language learning and confirms the important role that recitation input plays in improving college students' oral and written English.

  4. Validity of the assumption of Gaussian turbulence; Gyldighed af antagelsen om Gaussisk turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, M.; Hansen, K.S.; Juul Pedersen, B.

    2000-07-01

    Wind turbines are designed to withstand the impact of turbulent winds, whose fluctuations are usually assumed to follow a Gaussian probability distribution. Based on a large number of measurements from many sites, this seems a reasonable assumption in flat homogeneous terrain, whereas it may fail in complex terrain. At these sites the wind speed often has a skew distribution with more frequent lulls than gusts. In order to simulate aerodynamic loads, a numerical turbulence simulation method was developed and implemented. This method may simulate multiple time series of variable, not necessarily Gaussian, distribution without distortion of the spectral distribution or spatial coherence. The simulated time series were used as input to the dynamic-response simulation program Vestas Turbine Simulator (VTS). In this way we simulated the dynamic response of systems exposed to turbulence of either Gaussian or extreme, yet realistic, non-Gaussian probability distribution. Certain loads on turbines with active pitch regulation were enhanced by up to 15% compared to pure Gaussian turbulence. It should, however, be said that the undesired effect depends on the dynamic system, and it might be mitigated by tuning the wind turbine regulation system to local turbulence characteristics. (au)
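
    A common way to obtain correlated, non-Gaussian series of this kind is a translation process: generate a correlated Gaussian series and map its marginal onto a skewed target distribution. The Python sketch below shows the idea; it is not the simulation method of the paper (which avoids distorting the spectrum), and the AR(1) correlation and skew-normal parameters are illustrative assumptions of ours:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # 1) Correlated Gaussian surrogate (an AR(1) process, used here for brevity).
        n, phi = 10_000, 0.95
        g = np.empty(n)
        g[0] = rng.standard_normal()
        for i in range(1, n):
            g[i] = phi * g[i - 1] + np.sqrt(1.0 - phi**2) * rng.standard_normal()

        # 2) Map the Gaussian marginal onto a negatively skewed target
        #    (more frequent lulls than gusts), preserving the rank structure.
        u = stats.norm.cdf(g)
        target = stats.skewnorm(a=-4, loc=8.0, scale=2.0)   # hypothetical wind-speed marginal
        wind = target.ppf(u)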

  5. Facilitating agricultural input distribution in Uganda - Experiences ...

    African Journals Online (AJOL)

    Mo

    The input supply market however, suffered a setback as a result of the ... Ltd. redefined the approach emphasizing a demand driven input market by shifting ... Training of business entrepreneurs in business planning, ... The strategy to increase rural demand for agricultural inputs ..... During season 2004A, the basic fertilizers.

  6. Effects of Auditory Input in Individuation Tasks

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…

  7. 7 CFR 3430.607 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION... § 3430.607 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety of...

  8. 7 CFR 3430.15 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.15 Section 3430.15... Stakeholder input. Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998... RFAs for competitive programs. CSREES will provide instructions for submission of stakeholder input...

  9. 7 CFR 3430.907 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION... Program § 3430.907 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety...

  10. Projecting the future of Canada's population: assumptions, implications, and policy

    Directory of Open Access Journals (Sweden)

    Beaujot, Roderic

    2003-01-01

    Full Text Available After considering the assumptions for fertility, mortality and international migration, this paper looks at implications of the evolving demographics for population growth, labour force, retirement, and population distribution. With the help of policies favouring gender equity and supporting families of various types, fertility in Canada could avoid the particularly low levels seen in some countries, and remain at levels closer to 1.6 births per woman. The prognosis in terms of both risk factors and treatment suggests further reductions in mortality toward a life expectancy of 85. On immigration, there are political interests for levels as high as 270,000 per year, while levels of 150,000 correspond to the long-term post-war average. The future will see slower population growth, driven more by migration than by natural increase. International migration of some 225,000 per year can enable Canada to avoid population decline, and sustain the size of the labour force, but all scenarios show much change in the relative size of the retired compared to the labour force population. According to the ratio of persons aged 20-64 to that aged 65 and over, there were seven persons at labour force ages per person at retirement age in 1951, compared to five in 2001 and probably less than 2.5 in 2051. Growth that is due to migration more so than natural increase will accentuate the urbanization trend and the unevenness of the population distribution over space. Past projections have under-projected the mortality improvements and their impact on the relative size of the population at older age groups. Policies regarding fertility, mortality and migration could be aimed at avoiding population decline and reducing the effect of aging, but there is a lack of an institutional basis for policy that would seek to endogenize population.

  11. Providing security assurance in line with national DBT assumptions

    Science.gov (United States)

    Bajramovic, Edita; Gupta, Deeksha

    2017-01-01

    As worldwide energy requirements are increasing simultaneously with climate change and energy security considerations, States are thinking about building nuclear power to fulfill their electricity requirements and decrease their dependence on carbon fuels. New nuclear power plants (NPPs) must have comprehensive cybersecurity measures integrated into their design, structure, and processes. In the absence of effective cybersecurity measures, the impact of nuclear security incidents can be severe. Some of the current nuclear facilities were not specifically designed and constructed to deal with the new threats, including targeted cyberattacks. Thus, newcomer countries must consider the Design Basis Threat (DBT) as one of the security fundamentals during design of physical and cyber protection systems of nuclear facilities. IAEA NSS 10 describes the DBT as "comprehensive description of the motivation, intentions and capabilities of potential adversaries against which protection systems are designed and evaluated". Nowadays, many threat actors, including hacktivists, insider threat, cyber criminals, state and non-state groups (terrorists) pose security risks to nuclear facilities. Threat assumptions are made on a national level. Consequently, threat assessment closely affects the design structures of nuclear facilities. Some of the recent security incidents e.g. Stuxnet worm (Advanced Persistent Threat) and theft of sensitive information in South Korea Nuclear Power Plant (Insider Threat) have shown that these attacks should be considered as the top threat to nuclear facilities. Therefore, the cybersecurity context is essential for secure and safe use of nuclear power. In addition, States should include multiple DBT scenarios in order to protect various target materials, types of facilities, and adversary objectives. Development of a comprehensive DBT is a precondition for the establishment and further improvement of domestic state nuclear-related regulations in the

  12. Input calibration for negative originals

    Science.gov (United States)

    Tuijn, Chris

    1995-04-01

    One of the major challenges in the prepress environment consists of controlling the electronic color reproduction process such that a perfect match of any original can be realized. Whether this goal can be reached depends on many factors such as the dynamic range of the input device (scanner, camera), the color gamut of the output device (dye sublimation printer, ink-jet printer, offset), the color management software, etc. The characterization of the color behavior of the peripheral devices is therefore very important. Photographs and positive transparencies reflect the original scene pretty well; for negative originals, however, there is no obvious link to either the original scene or a particular print of the negative under consideration. In this paper, we establish a method to scan negatives and to convert the scanned data to a calibrated RGB space, which is known colorimetrically. This method is based on the reconstruction of the original exposure conditions (i.e., original scene) which generated the negative. Since the characteristics of negative film are quite diverse, a special calibration is required for each combination of scanner and film type.

  13. When real life wind speed exceeds design wind assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Winther-Jensen, M.; Joergensen, E.R. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Most modern wind turbines are designed according to a standard or a set of standards to withstand the design loads with a defined survival probability. The loads are mainly given by the wind conditions on the site, defining the `design wind speeds`, normally including extreme wind speeds given as an average and a peak value. The extreme wind speeds are normally (e.g. in the upcoming IEC standard for wind turbine safety) defined as having a 50-year recurrence period. But what happens when the 100- or 10,000-year wind situation hits a wind turbine? The effects on wind turbines of wind speeds higher than the extreme design wind speeds are presented, based on experiences especially from the State of Gujarat in India. A description of the normal approach of designing wind turbines in accordance with the standards is briefly given in this paper, with special focus on limitations and built-in safety levels. Based on that, possibilities other than simply accepting damage to wind turbines exposed to higher-than-design wind speeds are mentioned and discussed. The presentation does not intend to give the final answer to this problem but is meant as an input to further investigations and discussions. (au)

  14. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    Science.gov (United States)

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  15. Coefficient Alpha as an Estimate of Test Reliability under Violation of Two Assumptions.

    Science.gov (United States)

    Zimmerman, Donald W.; And Others

    1993-01-01

    Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
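
    Both violations can be reproduced in a few lines of Python; the loadings and the shared nuisance factor below are toy assumptions of ours, chosen only so that alpha falls below or rises above the true reliability as the abstract describes:

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_subjects, k_items) array of subtest scores."""
            k = items.shape[1]
            return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

        rng = np.random.default_rng(1)
        n, k = 2000, 4
        t = rng.standard_normal(n)                       # common true score

        # Unequal loadings violate essential tau-equivalence: alpha underestimates.
        loadings = np.array([0.3, 0.6, 0.9, 1.2])
        print(cronbach_alpha(t[:, None] * loadings + rng.standard_normal((n, k))))

        # A shared nuisance factor correlates the errors: alpha overestimates.
        nuisance = 0.8 * rng.standard_normal(n)
        print(cronbach_alpha(t[:, None] + nuisance[:, None] + rng.standard_normal((n, k))))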

  16. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  17. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    Science.gov (United States)

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  18. Exploring the Influence of Ethnicity, Age, and Trauma on Prisoners' World Assumptions

    Science.gov (United States)

    Gibson, Sandy

    2011-01-01

    In this study, the author explores world assumptions of prisoners, how these assumptions vary by ethnicity and age, and whether trauma history affects world assumptions. A random sample of young and old prisoners, matched for prison location, was drawn from the New Jersey Department of Corrections prison population. Age and ethnicity had…

  19. Turn customer input into innovation.

    Science.gov (United States)

    Ulwick, Anthony W

    2002-01-01

    It's difficult to find a company these days that doesn't strive to be customer-driven. Too bad, then, that most companies go about the process of listening to customers all wrong--so wrong, in fact, that they undermine innovation and, ultimately, the bottom line. What usually happens is this: Companies ask their customers what they want. Customers offer solutions in the form of products or services. Companies then deliver these tangibles, and customers just don't buy. The reason is simple--customers aren't expert or informed enough to come up with solutions. That's what your R&D team is for. Rather, customers should be asked only for outcomes--what they want a new product or service to do for them. The form the solutions take should be up to you, and you alone. Using Cordis Corporation as an example, this article describes, in fine detail, a series of effective steps for capturing, analyzing, and utilizing customer input. First come in-depth interviews, in which a moderator works with customers to deconstruct a process or activity in order to unearth "desired outcomes." Addressing participants' comments one at a time, the moderator rephrases them to be both unambiguous and measurable. Once the interviews are complete, researchers then compile a comprehensive list of outcomes that participants rank in order of importance and degree to which they are satisfied by existing products. Finally, using a simple mathematical formula called the "opportunity calculation," researchers can learn the relative attractiveness of key opportunity areas. These data can be used to uncover opportunities for product development, to properly segment markets, and to conduct competitive analysis.
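
    For concreteness, the opportunity calculation as Ulwick has published it elsewhere (the abstract itself does not spell the formula out) scores each desired outcome from its importance and satisfaction ratings; the outcomes and ratings in this Python sketch are hypothetical:

        def opportunity_score(importance, satisfaction):
            """Opportunity = importance + max(importance - satisfaction, 0),
            with both ratings on a 0-10 scale."""
            return importance + max(importance - satisfaction, 0.0)

        outcomes = {                                     # hypothetical desired outcomes
            "minimize time to restore blood flow": (9.5, 3.2),
            "minimize recovery time": (8.1, 7.9),
        }
        for name, (imp, sat) in sorted(outcomes.items(),
                                       key=lambda kv: -opportunity_score(*kv[1])):
            print(f"{opportunity_score(imp, sat):5.1f}  {name}")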

  20. Quantitative amyloid imaging using image-derived arterial input function.

    Directory of Open Access Journals (Sweden)

    Yi Su

    Full Text Available Amyloid PET imaging is an indispensable tool widely used in the investigation, diagnosis and monitoring of Alzheimer's disease (AD). Currently, a reference region based approach is used as the mainstream quantification technique for amyloid imaging. This approach assumes the reference region is amyloid free and has the same tracer influx and washout kinetics as the regions of interest. However, this assumption may not always be valid. The goal of this work is to evaluate an amyloid imaging quantification technique that uses arterial region of interest as the reference to avoid potential bias caused by specific binding in the reference region. 21 participants, age 58 and up, underwent Pittsburgh compound B (PiB) PET imaging and MR imaging including a time-of-flight (TOF) MR angiography (MRA) scan and a structural scan. FreeSurfer based regional analysis was performed to quantify PiB PET data. Arterial input function was estimated based on coregistered TOF MRA using a modeling based technique. Regional distribution volume (VT) was calculated using Logan graphical analysis with estimated arterial input function. Kinetic modeling was also performed using the estimated arterial input function as a way to evaluate PiB binding (DVRkinetic) without a reference region. As a comparison, Logan graphical analysis was also performed with cerebellar cortex as reference to obtain DVRREF. Excellent agreement was observed between the two distribution volume ratio measurements (r>0.89, ICC>0.80). The estimated cerebellum VT was in line with literature reported values and the variability of cerebellum VT in the control group was comparable to reported variability using arterial sampling data. This study suggests that image-based arterial input function is a viable approach to quantify amyloid imaging data, without the need of arterial sampling or a reference region. This technique can be a valuable tool for amyloid imaging, particularly in population where reference
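
    Once an input function is available, the Logan graphical step referred to above reduces to a late-time line fit. The Python sketch below uses trapezoidal integrals and a user-chosen linearization time t*; the variable names are our assumptions rather than the paper's code:

        import numpy as np

        def logan_vt(t, c_tissue, c_plasma, t_star):
            """Logan graphical analysis: for t >= t*, the plot of
            y = int_0^T Ct dt / Ct(T) against x = int_0^T Cp dt / Ct(T)
            becomes linear with slope equal to the distribution volume VT."""
            int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_tissue[1:] + c_tissue[:-1]))))
            int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
            late = t >= t_star
            x = int_cp[late] / c_tissue[late]
            y = int_ct[late] / c_tissue[late]
            slope, _ = np.polyfit(x, y, 1)
            return slope

        # A reference-free DVR then follows as VT_region / VT_cerebellum.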

  1. Quantitative amyloid imaging using image-derived arterial input function.

    Science.gov (United States)

    Su, Yi; Blazey, Tyler M; Snyder, Abraham Z; Raichle, Marcus E; Hornbeck, Russ C; Aldea, Patricia; Morris, John C; Benzinger, Tammie L S

    2015-01-01

    Amyloid PET imaging is an indispensable tool widely used in the investigation, diagnosis and monitoring of Alzheimer's disease (AD). Currently, a reference region based approach is used as the mainstream quantification technique for amyloid imaging. This approach assumes the reference region is amyloid free and has the same tracer influx and washout kinetics as the regions of interest. However, this assumption may not always be valid. The goal of this work is to evaluate an amyloid imaging quantification technique that uses arterial region of interest as the reference to avoid potential bias caused by specific binding in the reference region. 21 participants, age 58 and up, underwent Pittsburgh compound B (PiB) PET imaging and MR imaging including a time-of-flight (TOF) MR angiography (MRA) scan and a structural scan. FreeSurfer based regional analysis was performed to quantify PiB PET data. Arterial input function was estimated based on coregistered TOF MRA using a modeling based technique. Regional distribution volume (VT) was calculated using Logan graphical analysis with estimated arterial input function. Kinetic modeling was also performed using the estimated arterial input function as a way to evaluate PiB binding (DVRkinetic) without a reference region. As a comparison, Logan graphical analysis was also performed with cerebellar cortex as reference to obtain DVRREF. Excellent agreement was observed between the two distribution volume ratio measurements (r>0.89, ICC>0.80). The estimated cerebellum VT was in line with literature reported values and the variability of cerebellum VT in the control group was comparable to reported variability using arterial sampling data. This study suggests that image-based arterial input function is a viable approach to quantify amyloid imaging data, without the need of arterial sampling or a reference region. This technique can be a valuable tool for amyloid imaging, particularly in population where reference normalization may

  2. Simplified subsurface modelling: data assimilation and violated model assumptions

    Science.gov (United States)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model-compartments are large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated
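
    The assimilation machinery such a study relies on can be sketched compactly. Below is a generic stochastic Ensemble Kalman filter analysis step in Python; the simplified coupled 2D/1D model would supply the forecast ensemble, and the linear observation operator H (e.g., picking out groundwater heads at well locations) is our illustrative assumption:

        import numpy as np

        def enkf_update(ensemble, H, y_obs, obs_std, rng):
            """ensemble: (n_state, n_members) forecast states
            H:        (n_obs, n_state) linear observation operator
            y_obs:    (n_obs,) measured groundwater levels / fluxes"""
            n_obs, n_mem = H.shape[0], ensemble.shape[1]
            X = ensemble - ensemble.mean(axis=1, keepdims=True)    # anomalies
            HX = H @ X
            P_yy = HX @ HX.T / (n_mem - 1) + obs_std**2 * np.eye(n_obs)
            P_xy = X @ HX.T / (n_mem - 1)
            K = P_xy @ np.linalg.inv(P_yy)                         # Kalman gain
            y_pert = y_obs[:, None] + obs_std * rng.standard_normal((n_obs, n_mem))
            return ensemble + K @ (y_pert - H @ ensemble)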

  3. Output-input coupling in thermally fluctuating biomolecular machines

    CERN Document Server

    Kurzynski, Michal; Chelminiak, Przemyslaw

    2011-01-01

    Biological molecular machines are proteins that operate under isothermal conditions hence are referred to as free energy transducers. They can be formally considered as enzymes that simultaneously catalyze two chemical reactions: the free energy-donating reaction and the free energy-accepting one. Most if not all biologically active proteins display a slow stochastic dynamics of transitions between a variety of conformational substates composing their native state. In the steady state, this dynamics is characterized by mean first-passage times between transition substates of the catalyzed reactions. On taking advantage of the assumption that each reaction proceeds through a single pair (the gate) of conformational transition substates of the enzyme-substrates complex, analytical formulas were derived for the flux-force dependence of the both reactions, the respective stalling forces and the degree of coupling between the free energy-accepting (output) reaction flux and the free energy-donating (input) one. Th...

  4. 'Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community’

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make underlying assumptions, values and views regarding children and childhood in making design decisions more explicit. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting on those assumptions and the possible influences on their design decisions? How can we make the assumptions explicit, discuss them in the IDC community and use the discussion to develop higher quality design and research? The workshop will support discussion between researchers, designers and practitioners, and intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design.

  5. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
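
    The central misconception is easy to demonstrate: the normality assumption concerns the errors, not the variables. In the Python sketch below the predictor is deliberately skewed, yet the residuals of a correctly specified linear model are normal; the data-generating values are toy choices of ours:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = rng.exponential(size=500)                 # skewed predictor -- perfectly fine
        y = 2.0 + 1.5 * x + rng.standard_normal(500)  # normal errors

        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (intercept + slope * x)

        print(stats.shapiro(residuals))  # the appropriate check: the residuals
        print(stats.shapiro(x))          # the misconception: x itself need not be normal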

  6. Performance Analysis of Adaptive Volterra Filters in the Finite-Alphabet Input Case

    Directory of Open Access Journals (Sweden)

    Jaïdane Mériem

    2004-01-01

    Full Text Available This paper deals with the analysis of adaptive Volterra filters, driven by the LMS algorithm, in the finite-alphabet inputs case. A tailored approach for the input context is presented and used to analyze the behavior of this nonlinear adaptive filter. Complete and rigorous mean square analysis is provided without any constraining independence assumption. Exact transient and steady-state performances expressed in terms of critical step size, rate of transient decrease, optimal step size, excess mean square error in stationary mode, and tracking nonstationarities are deduced.
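
    As a point of reference for the analysis, here is a minimal Python implementation of a second-order Volterra filter adapted by LMS; it is generic (it does not exploit the finite-alphabet structure analysed in the paper), and the memory length and step size are illustrative:

        import numpy as np

        def lms_volterra2(x, d, memory=3, mu=0.01):
            """Adapt linear + quadratic Volterra kernels so that the filter
            output tracks the desired signal d from the input x."""
            M = memory
            iu, ju = np.triu_indices(M)                  # upper-triangular quadratic terms
            w = np.zeros(M + M * (M + 1) // 2)
            e = np.zeros(len(x))
            for n in range(M, len(x)):
                u = x[n - M + 1:n + 1][::-1]             # most recent M inputs
                phi = np.concatenate((u, u[iu] * u[ju])) # regressor vector
                e[n] = d[n] - w @ phi
                w += mu * e[n] * phi                     # LMS update
            return w, e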

  7. Input estimation from measured structural response

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, Dustin [Los Alamos National Laboratory; Cross, Elizabeth [Los Alamos National Laboratory; Silva, Ramon A [Los Alamos National Laboratory; Farrar, Charles R [Los Alamos National Laboratory; Bement, Matt [Los Alamos National Laboratory

    2009-01-01

    This report will focus on the estimation of unmeasured dynamic inputs to a structure given a numerical model of the structure and measured response acquired at discrete locations. While the estimation of inputs has not received as much attention historically as state estimation, there are many applications where an improved understanding of the immeasurable input to a structure is vital (e.g. validating temporally varying and spatially-varying load models for large structures such as buildings and ships). In this paper, the introduction contains a brief summary of previous input estimation studies. Next, an adjoint-based optimization method is used to estimate dynamic inputs to two experimental structures. The technique is evaluated in simulation and with experimental data both on a cantilever beam and on a three-story frame structure. The performance and limitations of the adjoint-based input estimation technique are discussed.
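
    While the report uses adjoint-based optimization, the estimation problem itself can be illustrated with a simpler Tikhonov-regularized least-squares deconvolution; the matrix H, built from the structure's impulse response, and the regularization weight are our illustrative assumptions, not the report's method:

        import numpy as np

        def estimate_input(H, y, lam=1e-2):
            """Estimate an unmeasured input u from the measured response
            y ~ H @ u by solving (H'H + lam*I) u = H'y."""
            A = H.T @ H + lam * np.eye(H.shape[1])
            return np.linalg.solve(A, H.T @ y)

        # For a sampled impulse response h, H is the lower-triangular Toeplitz
        # convolution matrix, so y = H @ u is the discrete convolution h * u,
        # and estimate_input would recover, e.g., an unmeasured hammer impact.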

  8. Input Method "Five Strokes": Advantages and Problems

    Directory of Open Access Journals (Sweden)

    Mateja PETROVČIČ

    2014-03-01

    Since the Five Stroke input method is easily accessible, simple to master and is not pronunciation-based, we would expect that students will use it to input unknown characters. The survey comprises students of Japanology and Sinology at the Department of Asian and African Studies, and takes into consideration the grade of each respondent and therefore his/her knowledge of characters. This paper also discusses the impact of typeface on the accuracy of the input.

  9. Bilinearity in spatiotemporal integration of synaptic inputs.

    Directory of Open Access Journals (Sweden)

    Songting Li

    2014-12-01

    Full Text Available Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate dendritic integration could be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third additional bilinear term proportional to their product with a proportionality coefficient [Formula: see text]. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient [Formula: see text] is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integration, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse.
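
    Given recordings of each postsynaptic potential alone and of the summed response, the bilinear coefficient can be fitted directly. A least-squares Python sketch follows; treating the coefficient as constant over the fitted window is our simplification, not the paper's:

        import numpy as np

        def bilinear_coefficient(v1, v2, v12):
            """Fit k in the bilinear rule V12(t) ~ V1(t) + V2(t) + k * V1(t) * V2(t)
            from the two individual responses (v1, v2) and the joint response v12."""
            residual = v12 - (v1 + v2)
            prod = v1 * v2
            return float(prod @ residual / (prod @ prod))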

  10. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Science.gov (United States)

    Caricchi, Luca; Simpson, Guy; Schaltegger, Urs

    2016-04-01

    Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
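
    The sampling assumption stated in the abstract, that zircons crystallise in proportion to the magma volume inside the saturation window, translates directly into code; the thermal model supplying that volume history is not reproduced in this Python sketch:

        import numpy as np

        def synthetic_zircon_ages(t, volume_in_window, n_zircons=100, seed=3):
            """Draw synthetic crystallisation ages with probability proportional
            to the magma volume within the zircon saturation window at each time."""
            rng = np.random.default_rng(seed)
            p = volume_in_window / volume_in_window.sum()
            return np.sort(rng.choice(t, size=n_zircons, p=p))

        # Mode, median and standard deviation of the sampled ages can then be
        # compared across injection rates and final volumes, and repeating the
        # draw with small n_zircons probes the undersampling effect.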

  11. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Directory of Open Access Journals (Sweden)

    Luca eCaricchi

    2016-04-01

    Full Text Available Magma fluxes in the Earth’s crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.

  12. Computing Functions by Approximating the Input

    Science.gov (United States)

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  13. Statistical identification of effective input variables. [SCREEN

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The method's efficiency has been demonstrated by several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
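
    The stagewise correlation idea can be sketched as a greedy forward screening in Python; this linear version is our simplification and omits SCREEN's nonlinear regression and threshold-effect tests:

        import numpy as np

        def rank_inputs(X, y, n_select=5):
            """Rank input columns of X by stagewise correlation with y:
            repeatedly pick the input most correlated with the residual
            output, regress it out, and continue."""
            Xs = (X - X.mean(axis=0)) / X.std(axis=0)
            r = y - y.mean()
            order = []
            for _ in range(n_select):
                score = np.abs(Xs.T @ r)
                score[order] = -np.inf              # skip already-selected inputs
                best = int(np.argmax(score))
                order.append(best)
                xb = Xs[:, best]
                r = r - (xb @ r) / (xb @ xb) * xb   # regress the winner out
            return order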

  14. Atmospheric Nitrogen input to the Kattegat

    DEFF Research Database (Denmark)

    Asman, W.A.H.; Hertel, O.; Berkowicz, R.

    1995-01-01

    An overview is given of the processes involved in the atmospheric deposition of nitrogen compounds. These processes are incorporated in an atmospheric transport model that is used to calculate the nitrogen input to the Kattegat, the sea area between Denmark and Sweden. The model results show that the total atmospheric nitrogen input to the Kattegat is approximately 960 kg N km(-2) yr(-1). The nitrogen input to the Kattegat is dominated by the wet depositions of NHx (42%) and NOy (30%). The contribution from the dry deposition of NHx is 17% and that of the dry deposition of NOy is 11%. The contribution of the atmospheric input of nitrogen to the Kattegat is about 30% of the total input including the net transport from other sea areas, runoff etc.

  15. Input impedance characteristics of microstrip structures

    Directory of Open Access Journals (Sweden)

    A. I. Nazarko

    2015-06-01

    Full Text Available Introduction. Electromagnetic crystals (EC) and EC-inhomogeneities are one of the main directions of microstrip device development. In the article the input impedance characteristics of EC- and traditional microstrip inhomogeneities, and of a filter based on EC-inhomogeneities, are investigated. Transmission coefficient characteristics. Transmission coefficient characteristics of low impedance EC- and traditional inhomogeneities are considered. Characteristics are calculated in the software package Microwave Studio. It is shown that the efficiency of the EC-inhomogeneity is much higher. Input impedance characteristics of low impedance inhomogeneities. Dependences of the active and reactive parts of the input impedance of EC- and traditional inhomogeneities are given. Dependences of the active part illustrate significant low impedance transformation of the nominal impedance. The conditions of impedance matching between the structure and the input medium are set. Input impedance characteristics of high impedance inhomogeneities. Input impedance characteristics of high impedance EC- and traditional inhomogeneities are considered. It is shown that the transformation band of high impedance inhomogeneities is much narrower than that of low impedance inhomogeneities. Characteristics of the reflection coefficient of the inhomogeneities are presented. Input impedance characteristics of a narrowband filter. The structure of a narrowband filter based on the scheme of a Fabry-Perot resonator is presented. The filter is realized with high impedance EC-inhomogeneities as reflectors. Experimental and theoretical amplitude-frequency characteristics of the filter are presented. Input impedance characteristics of the filter are shown. Conclusions. The input impedance characteristics of a structure make it possible to analyse its wave properties, especially resonant ones. EC-inhomogeneities, compared with traditional microstrip ones, provide a substantially more significant transformation of the input impedance.
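
    The impedance transformation these records analyse follows from the standard input-impedance relation for a terminated line section; the Python sketch below evaluates it for the quarter-wave case, where a low impedance section inverts the load impedance (the EC structures in the paper are of course more elaborate than a uniform section):

        import numpy as np

        def z_in(z0, zl, beta_l):
            """Input impedance of a lossless line of characteristic impedance z0,
            electrical length beta*l, terminated in zl:
            Zin = z0 * (zl + j z0 tan(beta l)) / (z0 + j zl tan(beta l))."""
            t = np.tan(beta_l)
            return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

        # Quarter-wave limit: Zin -> z0**2 / zl, e.g. a 25-ohm section transforms
        # a 50-ohm load to ~12.5 ohm.
        print(z_in(z0=25.0, zl=50.0, beta_l=np.pi / 2))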

  16. Critical appraisal of assumptions in chains of model calculations used to project local climate impacts for adaptation decision support—the case of Baakse Beek

    Science.gov (United States)

    van der Sluijs, Jeroen P.; Arjan Wardekker, J.

    2015-04-01

    In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of respectively subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so called ‘mineralization reduction factor’ (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight in model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to

  17. The neuronal response at extended timescales: a linearized spiking input-output relation

    Directory of Open Access Journals (Sweden)

    Daniel eSoudry

    2014-04-01

    Full Text Available Many biological systems are modulated by unknown slow processes. This can severely hinder analysis - especially in excitable neurons, which are highly non-linear and stochastic systems. We show the analysis simplifies considerably if the input matches the sparse "spiky" nature of the output. In this case, a linearized spiking Input-Output (I/O) relation can be derived semi-analytically, relating input spike trains to output spikes based on known biophysical properties. Using this I/O relation we obtain closed-form expressions for all second order statistics (input - internal state - output correlations and spectra), construct optimal linear estimators for the neuronal response and internal state and perform parameter identification. These results are guaranteed to hold, for a general stochastic biophysical neuron model, with only a few assumptions (mainly, timescale separation). We numerically test the resulting expressions for various models, and show that they hold well, even in cases where our assumptions fail to hold. In a companion paper we demonstrate how this approach enables us to fit a biophysical neuron model so it reproduces experimentally observed temporal firing statistics on days-long experiments.

  18. The neuronal response at extended timescales: a linearized spiking input-output relation.

    Science.gov (United States)

    Soudry, Daniel; Meir, Ron

    2014-01-01

    Many biological systems are modulated by unknown slow processes. This can severely hinder analysis - especially in excitable neurons, which are highly non-linear and stochastic systems. We show the analysis simplifies considerably if the input matches the sparse "spiky" nature of the output. In this case, a linearized spiking Input-Output (I/O) relation can be derived semi-analytically, relating input spike trains to output spikes based on known biophysical properties. Using this I/O relation we obtain closed-form expressions for all second order statistics (input - internal state - output correlations and spectra), construct optimal linear estimators for the neuronal response and internal state and perform parameter identification. These results are guaranteed to hold, for a general stochastic biophysical neuron model, with only a few assumptions (mainly, timescale separation). We numerically test the resulting expressions for various models, and show that they hold well, even in cases where our assumptions fail to hold. In a companion paper we demonstrate how this approach enables us to fit a biophysical neuron model so it reproduces experimentally observed temporal firing statistics on days-long experiments.

  19. Water resources and environmental input-output analysis and its key study issues: a review

    Science.gov (United States)

    YANG, Z.; Xu, X.

    2013-12-01

    inland water resources IOA. Recent internal study references related to environmental input-output tables, pollution discharge analysis and environmental impact assessment had taken the leading position. Pollution discharge analysis, mainly aimed at CO2 discharge, had been regarded as a new hotspot of environmental IOA. Environmental impact assessment was an important direction of inland environmental IOA in recent years. Key study issues, including the Domestic Technology Assumption (DTA) and Sectoral Aggregation (SA), had been mentioned remarkably often. It was pointed out that multi-region input-output analysis (MIOA) may be helpful in solving the DTA. Because there was little study using effective analysis tools to quantify the bias of SA, and exploration of the appropriate degree of sectoral aggregation was scarce, research dedicated to exploring and solving these two key issues was deemed to be urgently needed. According to the study status, several points of outlook were proposed in the end.

  20. What influences children's conceptualizations of language input?

    Science.gov (United States)

    Plante, Elena; Vance, Rebecca; Moody, Amanda; Gerken, LouAnn

    2013-10-01

    Children learning language conceptualize the nature of input they receive in ways that allow them to understand and construct utterances they have never heard before. This study was designed to illuminate the types of information children with and without specific language impairment (SLI) focus on to develop their conceptualizations and whether they can rapidly shift their initial conceptualizations if provided with additional input. In 2 studies, preschool children with and without SLI were exposed to an artificial language, the characteristics of which allowed for various types of conceptualizations about its fundamental properties. After being familiarized with the language, children were asked to judge test strings that conformed to the input in 1 of 4 different ways. All children preferred test items that reflected a narrow conceptualization of the input (i.e., items most like those heard during familiarization). Children showed a strong preference for phonology as a defining property of the artificial language. Restructuring the input could induce children to track word order information as well. Children tend toward narrow conceptualizations of language input, but the nature of their conceptualizations can be influenced by the nature of the input they receive.

  1. Input-output model for MACCS nuclear accident impacts estimation¹

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
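
    As a purely illustrative sketch of the Input-Output mechanics that underlie such GDP-loss
    estimates, the snippet below propagates a hypothetical final-demand shock through a Leontief
    inverse. The sector count, coefficient matrix and demand losses are invented for illustration
    and are not values from REAcct or MACCS.

        import numpy as np

        # Illustrative 3-sector technical-coefficient matrix A (hypothetical values):
        # A[i, j] = dollars of sector-i input needed per dollar of sector-j output.
        A = np.array([[0.10, 0.20, 0.05],
                      [0.15, 0.05, 0.10],
                      [0.05, 0.10, 0.08]])

        # Leontief inverse: total (direct + indirect) output per unit of final demand.
        L = np.linalg.inv(np.eye(3) - A)

        # Hypothetical loss of final demand in each sector during an outage ($M).
        demand_loss = np.array([12.0, 3.0, 1.5])

        # Total output loss propagated through inter-industry linkages.
        output_loss = L @ demand_loss
        print("Output loss by sector ($M):", output_loss.round(2))
        print("Economy-wide loss ($M):", output_loss.sum().round(2))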

  2. Observer-Based Robust Coordinated Control of Multiagent Systems With Input Saturation.

    Science.gov (United States)

    Wang, Xiaoling; Su, Housheng; Chen, Michael Z Q; Wang, Xiaofan

    2017-04-13

    This paper addresses the robust semiglobal coordinated control of multiple-input multiple-output multiagent systems subject to input saturation, dead zone and input additive disturbance. An observer-based coordinated control protocol is constructed by combining the parameterized low-and-high-gain feedback technique and the high-gain observer design approach. It is shown that, under some mild assumptions on the agents' intrinsic dynamics, robust semiglobal consensus or a robust semiglobal swarm can be achieved for undirected connected multiagent systems. Specific guidelines on the selection of the low-gain parameter, the high-gain parameter, and the high-gain observer gain are then provided. Finally, numerical simulations are presented to illustrate the theoretical results.

  3. On Secrecy Rate Analysis of MIMO Wiretap Channels Driven by Finite-Alphabet Input

    CERN Document Server

    Bashar, Shafi; Xiao, Chengshan

    2011-01-01

    This work investigates the effect of finite-alphabet source input on the secrecy rate of a multi-antenna wiretap system. Existing works have characterized the maximum achievable secrecy rate or secrecy capacity for single and multiple antenna systems based on Gaussian source signals and secrecy codes. Despite the impracticality of Gaussian sources, the compact closed-form expression for the mutual information between a linear channel's Gaussian input and the corresponding output has led to broad application of the Gaussian input assumption in physical-layer secrecy analysis. For practical considerations, we study the effect of finite discrete constellations on the achievable secrecy rate of multiple-antenna wiretap channels. Our proposed precoding scheme converts the multi-antenna system into a bank of parallel channels. Based on this precoding strategy, we propose a decentralized power allocation algorithm based on dual decomposition for maximizing the achievable secrecy rate. In addition, we analyze the achievable secrecy rate for fin...

  4. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects in high speed CMOS circuits for ramp inputs. Our metric is based on the Burr distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with SPICE simulations.
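
    A minimal sketch of the idea, under the assumption that the normalized far-end step response
    is modeled as a Burr XII CDF: the delay and slew metrics then become percent points of that
    distribution. The shape parameters c, d and the time scale below are hypothetical stand-ins
    for values that would be matched to the interconnect's circuit moments.

        from scipy.stats import burr12

        # Hypothetical Burr XII shape parameters and time scale (seconds).
        c, d, scale = 2.0, 1.5, 1e-9

        # Treat the normalized step response as the Burr CDF: the 50% delay is
        # its median and the slew is the 10%-90% rise window.
        delay_50 = burr12.ppf(0.5, c, d, scale=scale)
        slew_10_90 = burr12.ppf(0.9, c, d, scale=scale) - burr12.ppf(0.1, c, d, scale=scale)

        print(f"50% delay : {delay_50:.3e} s")
        print(f"10-90 slew: {slew_10_90:.3e} s")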

  5. Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.

    Science.gov (United States)

    Ferrante, Karlene

    Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…

  6. Comparative Interpretation of Classical and Keynesian Fiscal Policies (Assumptions, Principles and Primary Opinions

    Directory of Open Access Journals (Sweden)

    Engin Oner

    2015-06-01

    Full Text Available In the Classical School, founded by Adam Smith, which gives prominence to supply and adopts an approach of neutral finance, the economy is always in a state of full employment equilibrium. In this system of thought, whose main philosophy is budget balance, which asserts that there is flexibility between prices and wages, and which regards public debt as an extraordinary instrument, the interference of the state in economic and social life is frowned upon. In line with the views of classical thought, classical fiscal policy is based on three basic assumptions. These are the "Consumer State Assumption", the assumption that "Public Expenditures are Always Ineffectual" and the assumption concerning the "Impartiality of the Taxes and Expenditure Policies Implemented by the State". On the other hand, the Keynesian School, founded by John Maynard Keynes, gives prominence to demand, adopts the approach of functional finance, and asserts that cases of underemployment equilibrium and over-employment equilibrium exist in the economy as well as full employment equilibrium, that problems cannot be solved through the invisible hand, that prices and wages are rigid, that the interference of the state is essential, and that at this point fiscal policies have to be utilized effectively. Keynesian fiscal policy depends on three primary assumptions. These are the assumption of the "Filter State", the assumption that "public expenditures are sometimes effective and sometimes ineffective or neutral" and the assumption that "the tax, debt and expenditure policies of the state can never be impartial".

  7. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    Science.gov (United States)

    Kirby, Moira

    2017-01-01

    Introduction: Every day, millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  8. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    Science.gov (United States)

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory, we show that there is little justification for relaxing the equal weights assumption underlying the United Nations' Human Development Index (HDI), even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  9. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    on those assumptions and the possible influences on their design decisions? How can we make the assumptions explicit, discuss them in the IDC community and use the discussion to develop higher quality design and research? The workshop will support discussion between researchers, designers and practitioners...

  10. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    Science.gov (United States)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove a Bell theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variable states. Only assumptions based on direct experimental observation are needed.

  11. Making Foundational Assumptions Transparent: Framing the Discussion about Group Communication and Influence

    Science.gov (United States)

    Meyers, Renee A.; Seibold, David R.

    2009-01-01

    In this article, the authors seek to augment Dean Hewes's (1986, 1996) intriguing bracketing and admirable larger effort to "return to basic theorizing in the study of group communication" by making transparent the foundational, and debatable, assumptions that underlie those models. Although these assumptions are addressed indirectly by Hewes, the…

  12. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Science.gov (United States)

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  13. Assessing Key Assumptions of Network Meta-Analysis: A Review of Methods

    Science.gov (United States)

    Donegan, Sarah; Williamson, Paula; D'Alessandro, Umberto; Tudur Smith, Catrin

    2013-01-01

    Background: Homogeneity and consistency assumptions underlie network meta-analysis (NMA). Methods exist to assess the assumptions but they are rarely and poorly applied. We review and illustrate methods to assess homogeneity and consistency. Methods: Eligible articles focussed on indirect comparison or NMA methodology. Articles were sought by…

  14. Teaching Lessons in Exclusion: Researchers' Assumptions and the Ideology of Normality

    Science.gov (United States)

    Benincasa, Luciana

    2012-01-01

    Filling in a research questionnaire means coming into contact with the researchers' assumptions. In this sense, filling in a questionnaire may be described as a learning situation. In this paper, I carry out a discourse analysis of selected questionnaire items from a number of studies, in order to highlight underlying values and assumptions, and their…

  15. Scaling of global input-output networks

    Science.gov (United States)

    Liang, Sai; Qi, Zhengling; Qu, Shen; Zhu, Ji; Chiu, Anthony S. F.; Jia, Xiaoping; Xu, Ming

    2016-06-01

    Examining scaling patterns of networks can help understand how structural features relate to the behavior of the networks. Input-output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicators in this study that are important in economic analysis, and also examine the impact of dataset choice, by studying input-output networks in individual countries and the entire world. Results show that Burr, Log-Logistic, Log-normal, and Weibull distributions can better describe scaling patterns of global input-output networks. We also find that dataset choice has limited impacts on the observed scaling patterns. Our findings can help examine the quality of economic statistics, estimate missing data in economic statistics, and identify key nodes and links in input-output networks to support economic policymaking.
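
    The family comparison reported above can be sketched as follows: fit each candidate
    distribution to a vector of node strengths by maximum likelihood and rank the fits by AIC.
    The lognormal toy data below merely stand in for real input-output network strengths.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        strengths = rng.lognormal(mean=2.0, sigma=1.2, size=500)  # toy heavy-tailed data

        # The candidate families named in the abstract (fisk = log-logistic in SciPy).
        candidates = {"burr12": stats.burr12, "fisk": stats.fisk,
                      "lognorm": stats.lognorm, "weibull_min": stats.weibull_min}
        for name, dist in candidates.items():
            params = dist.fit(strengths, floc=0)            # fix location at zero
            k = len(params) - 1                             # loc was held fixed
            ll = np.sum(dist.logpdf(strengths, *params))    # log-likelihood at the fit
            print(f"{name:12s} AIC = {2 * k - 2 * ll:10.1f}")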

  16. Input data to run Landis-II

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The data are input data files to run the forest simulation model Landis-II for Isle Royale National Park. Files include: a) Initial_Comm, which includes the location...

  17. Existence conditions for unknown input functional observers

    Science.gov (United States)

    Fernando, T.; MacDougall, S.; Sreeram, V.; Trinh, H.

    2013-01-01

    This article presents necessary and sufficient conditions for the existence and design of an unknown input functional observer. The existence of the observer can be verified by computing a nullspace of a known matrix and testing some matrix rank conditions. The existence of the observer does not require satisfaction of the observer matching condition (i.e. Equation (16) in Hou and Muller 1992, 'Design of Observers for Linear Systems with Unknown Inputs', IEEE Transactions on Automatic Control, 37, 871-875), is not limited to estimating scalar functionals, and allows for arbitrary pole placement. The proposed observer always exists when a state observer exists for the unknown input system; furthermore, the proposed observer can exist even in some instances when an unknown input state observer does not exist.
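
    For context, the classical matching condition that this design does not require can be
    checked with a few lines of linear algebra. The sketch below tests rank(CE) = rank(E) on a
    hypothetical system where it fails, which is precisely the situation in which an observer
    free of that condition is valuable.

        import numpy as np

        def matching_condition_holds(C, E, tol=1e-10):
            """Classical unknown-input observer matching condition: rank(CE) == rank(E)."""
            return (np.linalg.matrix_rank(C @ E, tol=tol)
                    == np.linalg.matrix_rank(E, tol=tol))

        # Hypothetical system: 3 states, 2 outputs, 1 unknown input channel.
        C = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        E = np.array([[0.0], [0.0], [1.0]])  # disturbance enters the unmeasured state
        print(matching_condition_holds(C, E))  # False: CE = 0 while rank(E) = 1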

  18. Combined LTP and LTD of modulatory inputs controls neuronal processing of primary sensory inputs.

    Science.gov (United States)

    Doiron, Brent; Zhao, Yanjun; Tzounopoulos, Thanos

    2011-07-20

    A hallmark of brain organization is the integration of primary and modulatory pathways by principal neurons. However, the pathway interactions that shape primary input processing remain unknown. We investigated this problem in mouse dorsal cochlear nucleus (DCN) where principal cells integrate primary, auditory nerve input with modulatory, parallel fiber input. Using a combined experimental and computational approach, we show that combined LTP and LTD of parallel fiber inputs to DCN principal cells and interneurons, respectively, broaden the time window within which synaptic inputs summate. Enhanced summation depolarizes the resting membrane potential and thus lowers the response threshold to auditory nerve inputs. Combined LTP and LTD, by preserving the variance of membrane potential fluctuations and the membrane time constant, fixes response gain and spike latency as threshold is lowered. Our data reveal a novel mechanism mediating adaptive and concomitant homeostatic regulation of distinct features of neuronal processing of sensory inputs.

  19. Inadequacies of TPR and Krashen's Input Hypothesis

    Institute of Scientific and Technical Information of China (English)

    Meng Meng; LI Laifa

    2008-01-01

    In this paper, the rationale of TPR and Krashen's Input Hypothesis, which justifies the practices of TPR, are reviewed and criticized in the light of evidence from teachers' observation of a long-term TPR project. It is argued that the effectiveness of TPR is compromised by its inadequate attention to the complexity of classroom interactions and children's cognition, and that the Input Hypothesis oversimplifies the cognitive dynamics of language learning.

  20. Land Scale, Input-Output and Income

    Institute of Scientific and Technical Information of China (English)

    Mengzhi; DENG

    2013-01-01

    Based on an investigation of the production, inputs and income of 337 tobacco-farming families in 10 tobacco-specialty counties of Henan Province in 2010, differences in production, inputs and income were discussed. The results suggest that, in terms of land yield rate and tobacco growers' income, the suitable land scale for tobacco production in Henan Province is from 0.33 to 0.67 hm².

  1. Neural Networks with Complex and Quaternion Inputs

    OpenAIRE

    Rishiyur, Adityan

    2006-01-01

    This article investigates Kak neural networks, which can be trained instantaneously, for complex and quaternion inputs. The performance of the basic algorithm is analyzed, and it is shown how it provides a plausible model of human perception and understanding of images. The motivation for studying quaternion inputs is their use in representing spatial rotations, which find applications in computer graphics, robotics, global navigation, computer vision and the spatial orientation of instruments. ...

  2. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971
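
    The misconception singled out above is easy to demonstrate. The sketch below (simulated
    data, for illustration only) fits a line to a heavily skewed predictor with normal errors,
    then applies a normality test where it belongs, to the residuals rather than the variables.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.exponential(scale=2.0, size=200)       # heavily skewed predictor
        y = 1.5 * x + rng.normal(scale=1.0, size=200)  # normal errors around the line

        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)

        # Check the residuals, not the raw variables (the common misconception).
        _, p_x = stats.shapiro(x)
        _, p_res = stats.shapiro(residuals)
        print(f"Shapiro-Wilk p, predictor (irrelevant): {p_x:.4f}")
        print(f"Shapiro-Wilk p, residuals (relevant)  : {p_res:.4f}")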

  3. Assessing the assumption of symmetric proximity measures in the context of multidimensional scaling.

    Science.gov (United States)

    Kelley, Ken

    2004-01-01

    Applications of multidimensional scaling often make the assumption of symmetry for the population matrix of proximity measures. Although the likelihood of such an assumption holding true varies from one area of research to another, formal assessment of such an assumption has received little attention. The present article develops a nonparametric procedure that can be used in a confirmatory fashion or in an exploratory fashion in order to probabilistically assess the assumption of population symmetry for proximity measures in a multidimensional scaling context. The proposed procedure makes use of the bootstrap technique and alleviates the assumptions of parametric statistical procedures. Computer code for R and S-Plus is included in an appendix in order to carry out the proposed procedures.
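
    A rough sketch of a bootstrap assessment in this spirit (simulated rater data, not the
    article's R/S-Plus code): resample raters, recompute a signed asymmetry statistic of the
    mean proximity matrix, and check whether the confidence interval excludes zero.

        import numpy as np

        rng = np.random.default_rng(2)
        n_raters, k = 50, 4

        # Hypothetical data: 50 raters judge proximity among 4 stimuli, with a
        # small built-in asymmetry added above the diagonal.
        base = rng.uniform(1, 9, size=(k, k))
        data = base + 0.4 * np.triu(np.ones((k, k)), 1) + rng.normal(0, 1, (n_raters, k, k))

        def signed_asym(P):
            """Mean of P_ij - P_ji over the upper triangle (zero under symmetry)."""
            i, j = np.triu_indices(P.shape[0], k=1)
            return np.mean(P[i, j] - P[j, i])

        obs = signed_asym(data.mean(axis=0))
        boot = [signed_asym(data[rng.integers(0, n_raters, n_raters)].mean(axis=0))
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"observed = {obs:.3f}; 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
        # A CI that excludes 0 casts doubt on the symmetry assumption.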

  4. Output-input ratio in thermally fluctuating biomolecular machines.

    Science.gov (United States)

    Kurzynski, Michal; Torchala, Mieczyslaw; Chelminiak, Przemyslaw

    2014-01-01

    Biological molecular machines are proteins that operate under isothermal conditions and hence are referred to as free energy transducers. They can be formally considered as enzymes that simultaneously catalyze two chemical reactions: the free energy-donating (input) reaction and the free energy-accepting (output) one. Most if not all biologically active proteins display a slow stochastic dynamics of transitions between a variety of conformational substates composing their native state. This makes the description of the enzymatic reaction kinetics in terms of conventional rate constants insufficient. In the steady state, under the assumption that each reaction proceeds through a single pair (the gate) of transition conformational substates of the enzyme-substrates complex, the degree of coupling between the output and the input reaction fluxes has been expressed in terms of the mean first-passage times on a conformational transition network between the distinguished substates. The theory is confronted with the results of random-walk simulations on the five-dimensional hypercube. A formal proof is given that, for single input and output gates, the output-input degree of coupling cannot exceed unity. As some experiments suggest that this bound can be exceeded, identifying conditions under which the degree of coupling can rise above unity challenges the theory. Simulations of random walks on several model networks involving more extended gates indicate that a degree of coupling above 1 is realized in a natural way on critical branching trees extended by long-range shortcuts. Such networks are scale-free and display the small-world property. For short-range shortcuts, the networks are scale-free and fractal, representing a reasonable model for biomolecular machines displaying tight coupling, i.e., a degree of coupling equal exactly to unity. A hypothesis is stated that the protein conformational transition networks, as

  5. Significance of input correlations in striatal function.

    Directory of Open Access Journals (Sweden)

    Man Yi Yim

    2011-11-01

    Full Text Available The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia.
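
    Weak pairwise input correlation of the kind studied here is often generated by thinning a
    shared "mother" Poisson train; the sketch below uses that standard construction. All
    parameters are illustrative and are not taken from the network model in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n_inputs, T, dt = 100, 1.0, 1e-3   # inputs, duration (s), bin width (s)
        rate, c = 5.0, 0.1                 # target rate (Hz) and pairwise correlation

        # Each input copies each mother spike independently with probability c,
        # giving per-input rate (rate/c)*c = rate and pairwise correlation ~ c.
        n_bins = int(T / dt)
        mother = rng.random(n_bins) < (rate / c) * dt
        spikes = (rng.random((n_inputs, n_bins)) < c) & mother
        print("mean rate per input (Hz):", spikes.sum() / (n_inputs * T))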

  6. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment.
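
    As one concrete example of an assumption-laden method, the median-of-ratios estimator used
    by DESeq assumes that most genes are not differentially expressed between samples. The toy
    sketch below (invented counts) computes its per-sample size factors.

        import numpy as np

        counts = np.array([[100, 210,  95],     # toy genes x samples count matrix
                           [ 50, 105,  48],
                           [ 30,  60,  33],
                           [ 20,  45,  19],
                           [500, 980, 470]], dtype=float)

        log_geo_mean = np.mean(np.log(counts), axis=1)    # per-gene reference
        ratios = np.log(counts) - log_geo_mean[:, None]   # each sample vs reference
        size_factors = np.exp(np.median(ratios, axis=0))  # per-sample scaling
        normalized = counts / size_factors

        print("size factors:", size_factors.round(3))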

  7. An Exploration of Dental Students' Assumptions About Community-Based Clinical Experiences.

    Science.gov (United States)

    Major, Nicole; McQuistan, Michelle R

    2016-03-01

    The aim of this study was to ascertain which assumptions dental students recalled feeling prior to beginning community-based clinical experiences and whether those assumptions were fulfilled or challenged. All fourth-year students at the University of Iowa College of Dentistry & Dental Clinics participate in community-based clinical experiences. At the completion of their rotations, they write a guided reflection paper detailing the assumptions they had prior to beginning their rotations and assessing the accuracy of their assumptions. For this qualitative descriptive study, the 218 papers from three classes (2011-13) were analyzed for common themes. The results showed that the students had a variety of assumptions about their rotations. They were apprehensive about working with challenging patients, performing procedures for which they had minimal experience, and working too slowly. In contrast, they looked forward to improving their clinical and patient management skills and knowledge. Other assumptions involved the site (e.g., the equipment/facility would be outdated; protocols/procedures would be similar to the dental school's). Upon reflection, students reported experiences that both fulfilled and challenged their assumptions. Some continued to feel apprehensive about treating certain patient populations, while others found it easier than anticipated. Students were able to treat multiple patients per day, which led to increased speed and patient management skills. However, some reported challenges with time management. Similarly, students were surprised to discover some clinics were new/updated although some had limited instruments and materials. Based on this study's findings about students' recalled assumptions and reflective experiences, educators should consider assessing and addressing their students' assumptions prior to beginning community-based dental education experiences.

  8. Some Finite Sample Properties and Assumptions of Methods for Determining Treatment Effects

    DEFF Research Database (Denmark)

    Petrovski, Erik

    2016-01-01

    for determining treatment effects were chosen: ordinary least squares regression, propensity score matching, and inverse probability weighting. The assumptions and properties tested across these methods are: unconfoundedness, differences in average treatment effects and treatment effects on the treated, overlap...... will compare assumptions and properties of select methods for determining treatment effects with Monte Carlo simulation. The comparison will highlight the pros and cons of using one method over another and the assumptions that researchers need to make for the method they choose.Three popular methods...

  9. Troubling 'lived experience': a post-structural critique of mental health nursing qualitative research assumptions.

    Science.gov (United States)

    Grant, A

    2014-08-01

    Qualitative studies in mental health nursing research deploying the 'lived experience' construct are often written on the basis of conventional qualitative inquiry assumptions. These include the presentation of the 'authentic voice' of research participants, related to their 'lived experience' and underpinned by a meta-assumption of the 'metaphysics of presence'. This set of assumptions is critiqued on the basis of contemporary post-structural qualitative scholarship. Implications for mental health nursing qualitative research emerging from this critique are described in relation to illustrative published work, and some benefits and challenges for researchers embracing post-structural sensibilities are outlined.

  10. Tests of the frozen-flux and tangentially geostrophic assumptions using magnetic satellite data

    DEFF Research Database (Denmark)

    Chulliat, A.; Olsen, Nils; Sabaka, T.

    the very large number of flows explaining the observed secular variation under the frozen-flux assumption alone. More recently, it has been shown that the combined frozen-flux and tangentially geostrophic assumptions translate into constraints on the secular variation whose mathematics are now well understood. Using these constraints, we test the combined frozen-flux and tangentially geostrophic assumptions against recent, high-precision magnetic data provided by the Ørsted and CHAMP satellites. The methodology involves building constrained field models using least-squares methods. Two types of models...

  11. DC SQUIDS with planar input coils

    Energy Technology Data Exchange (ETDEWEB)

    Pegrum, C.M.; Donaldson, G.B.; Hutson, D.; Tugwell, A.

    1985-03-01

    We describe the key parts of our recent work to develop a planar thin-film DC SQUID with a closely-coupled spiral input coil. Our aim has been to make a device that is superior to present RF SQUID sensors in terms of sensitivity and long-term reliability. To be compatible with an RF SQUID the inductance of the input coil must be relatively large, typically 2 μH, and the input noise current in the white noise region should be below 10 pA/√Hz. A low level of 1/f noise is also necessary for many applications and should be achieved without the use of complex noise-cancelling circuitry. Our devices meet these criteria. We include a description of work on window and edge junction fabrication using ion beam cleaning, thermal oxidation and RF plasma processing.

  12. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches using a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS) are applied to predict the daily time series of suspended sediment load. Generally, in the NN- and RSM-based modeling approaches, the input variables for forecasting the suspended sediment load are selected manually based on their maximum correlations. Here the RSM is improved to select the input variables by using the error terms of the training data based on the GHS, giving the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performance of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, is compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
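
    The RSM model form, a second-order polynomial with cross terms over a few antecedent
    values, can be sketched as below. The synthetic discharge/sediment series and the
    particular three-input choice are illustrative; the GHS search over inputs is not shown.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(4)
        n = 400  # hypothetical daily discharge Q and sediment load S
        Q = 50 + 10 * np.sin(np.arange(n) / 20) + rng.normal(0, 2, n)
        S = 0.05 * Q**1.8 + rng.normal(0, 5, n)

        # Three antecedent inputs: S(t-1), Q(t), Q(t-1); target S(t).
        X = np.column_stack([S[1:-1], Q[2:], Q[1:-1]])
        y = S[2:]

        Phi = PolynomialFeatures(degree=2).fit_transform(X)  # quadratic + cross terms
        model = LinearRegression().fit(Phi, y)
        print("R =", np.corrcoef(model.predict(Phi), y)[0, 1].round(3))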

  13. Nonlinearities with Non-Gaussian Inputs.

    Science.gov (United States)

    1978-03-01

    ... possessing a spectral density function ... Let arctan[G(t)] be the input. By Theorem 3 this input is not bandlimited; ... such that the absolute value of any point in the spectrum is less than N. If the Gaussian process X(t) possesses a spectral density function (i.e. ...), then the series is convergent pointwise as well as in an L2 sense [5]. ...

  14. Input/Output Subroutine Library Program

    Science.gov (United States)

    Collier, James B.

    1988-01-01

    NAVIO, an Input/Output Subroutine Library, provides an input/output software package for FORTRAN programs that is portable, efficient, and easy to use. It is implemented as a hierarchy of libraries; at the bottom is a very small library containing only non-portable routines, called the "I/O Kernel." This design makes NAVIO easy to move from one computer to another by simply changing the kernel. NAVIO is appropriate for software systems of almost any size in which different programs communicate through files.

  15. A new scenario framework for climate change research: the concept of shared climate policy assumptions

    NARCIS (Netherlands)

    Kriegler, E.; Edmonds, J.; Hallegatte, S.; Ebi, K.L.; Kram, T.; Riahi, K.; Winkler, J.; van Vuuren, Detlef|info:eu-repo/dai/nl/11522016X

    2014-01-01

    The new scenario framework facilitates the coupling of multiple socioeconomic reference pathways with climate model products using the representative concentration pathways. This will allow for improved assessment of climate impacts, adaptation and mitigation. Assumptions about climate policy play a

  16. Who needs the assumption of opportunistic behavior? Transaction cost economics does not!

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2000-01-01

    The assumption of opportunistic behavior, familiar from transaction cost economics, has been and remains highly controversial. But opportunistic behavior, albeit undoubtedly an extremely important form of motivation, is not a necessary condition for the contractual problems studied by transaction...

  17. Washington International Renewable Energy Conference (WIREC) 2008 Pledges. Methodology and Assumptions Summary

    Energy Technology Data Exchange (ETDEWEB)

    Babiuch, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bilello, Daniel E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cowlin, Shannon C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wise, Alison [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2008-08-01

    This report describes the methodology and assumptions used by NREL in quantifying the potential CO2 reductions resulting from more than 140 governments, international organizations, and private-sector representatives pledging to advance the uptake of renewable energy.

  18. Learning disabilities theory and Soviet psychology: a comparison of basic assumptions.

    Science.gov (United States)

    Coles, G S

    1982-09-01

    Critics both within and outside the Learning Disabilities (LD) field have pointed to the weaknesses of LD theory. Beginning with the premise that a significant problem of LD theory has been its failure to explore fully its fundamental assumptions, this paper examines a number of these assumptions about individual and social development, cognition, and learning. These assumptions are compared with a contrasting body of premises found in Soviet psychology, particularly in the works of Vygotsky, Leontiev, and Luria. An examination of the basic assumptions of LD theory and Soviet psychology shows that a major difference lies in their respective nondialectical and dialectical interpretation of the relationship of social factors and cognition, learning, and neurological development.

  19. An Analysis of Input Hypothesis in English Teaching

    Institute of Scientific and Technical Information of China (English)

    赖菲菲

    2016-01-01

    Input plays a significant role in the process of foreign language teaching and learning. One of the most important studies of input is Krashen's Input Hypothesis, which emphasizes the importance of comprehensible input in foreign language teaching and learning. This paper aims to study the significance of the Input Hypothesis and its application to English teaching.

  20. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    Directory of Open Access Journals (Sweden)

    Hazim Adnan Hashim

    2016-09-01

    Full Text Available The present study addresses the effects of traumatic events such as the September 11 attacks on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth, and thus ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were deeply traumatic for Americans because they fundamentally changed their understanding of many aspects of life. The attacks led many individuals to build new kinds of beliefs and assumptions about themselves and the world. Many writers have written about the human ordeals that followed this incident. Don DeLillo’s Falling Man reflects the traumatic repercussions of this disaster on Americans’ fundamental assumptions. The objective of this study is to examine the novel from the perspective of the trauma that has afflicted the victims’ fundamental understandings of the world and the self. Individuals’ fundamental understandings can be changed or modified by exposure to certain types of events such as war, terrorism, political violence or even a sense of alienation. The Assumptive World theory of Ronnie Janoff-Bulman is used as a framework to study the traumatic experience of the characters in Falling Man. The significance of the study lies in providing a new perspective in the field of trauma that can help trauma victims adopt alternative assumptions or reshape their previous ones to heal from traumatic effects.

  1. Unrealistic Assumptions in Economics: an Analysis under the Logic of Socioeconomic Processes

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2014-11-01

    Full Text Available The realism of assumptions is an ongoing debate within the philosophy of economics. One of the most referenced papers in this matter belongs to Milton Friedman. He defends the use of unrealistic assumptions, not only as a pragmatic issue, but also because of the intrinsic difficulties of determining the extent of realism. On the other hand, realists have criticized (and still do today) the use of unrealistic assumptions - such as the assumption of rational choice, perfect information, homogeneous goods, etc. However, they did not accompany their statements with a proper epistemological argument that supports their positions. This work aims to show that the realism of (a particular sort of) assumptions is clearly relevant when examining economic models, since the system under study (the real economies) is not compatible with the logic of invariance and of mechanisms, but with the logic of possibility trees. Because of this, models will not function as tools for predicting outcomes, but as representations of alternative scenarios, whose similarity to the real world will be examined in terms of the verisimilitude of a class of model assumptions.

  2. Post-traumatic stress and world assumptions: the effects of religious coping.

    Science.gov (United States)

    Zukerman, Gil; Korn, Liat

    2014-12-01

    Religiosity has been shown to moderate the negative effects of traumatic event experiences. The current study was designed to examine the relationships between post-traumatic stress (PTS) following traumatic event exposure; world assumptions, defined as basic cognitive schemas regarding the world and the self; and religious coping, conceptualized as drawing on religious beliefs and practices for understanding and dealing with life stressors. This study examined 777 Israeli undergraduate students who completed several questionnaires sampling individual world assumptions and religious coping, in addition to measuring PTS as manifested in the PTSD checklist. Results indicate that positive religious coping was significantly associated with more positive world assumptions, while negative religious coping was significantly associated with more negative world assumptions. Additionally, negative world assumptions were significantly associated with more avoidance symptoms, while reporting higher rates of traumatic event exposure was significantly associated with more hyper-arousal. These findings suggest that religion-related cognitive schemas directly affect world assumptions by creating protective shields that may prevent the negative effects of confronting an extreme negative experience.

  3. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    Science.gov (United States)

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage.
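
    A minimal sketch of FCM inference for a housing-style map: concept activations are pushed
    through a signed weight matrix and a squashing function until a fixed point, with the
    intervention clamped on. The concept list mirrors the key variables identified above, but
    the edge weights are invented for illustration.

        import numpy as np

        concepts = ["air-tightness", "ventilation", "indoor air quality",
                    "mould/humidity", "health"]
        W = np.array([                      # W[i, j]: influence of concept i on j
            [0.0, -0.7, -0.4,  0.5,  0.0],  # tighter envelope -> less ventilation
            [0.0,  0.0,  0.6, -0.5,  0.0],
            [0.0,  0.0,  0.0,  0.0,  0.7],
            [0.0,  0.0, -0.3,  0.0, -0.6],
            [0.0,  0.0,  0.0,  0.0,  0.0],
        ])

        f = lambda v: 1 / (1 + np.exp(-v))        # squashing function
        x = np.array([1.0, 0.5, 0.5, 0.5, 0.5])   # start; concept 0 is the intervention

        for _ in range(30):                       # iterate toward a fixed point
            x = f(x + x @ W)
            x[0] = 1.0                            # keep the intervention switched on
        print(dict(zip(concepts, x.round(3))))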

  4. Declarative Semantics of Input Consuming Logic Programs

    NARCIS (Netherlands)

    Bossi, Annalisa; Cocco, Nicoletta; Etalle, Sandro; Rossi, Sabina; Bruynooghe, Maurice; Lau, Kung-Kia

    2004-01-01

    Most logic programming languages actually provide some kind of dynamic scheduling to increase the expressive power and to control execution. Input consuming derivations have been introduced to describe dynamic scheduling while abstracting from the technical details. We review and compare the differe

  5. Input and age effects: Quo vadis?

    NARCIS (Netherlands)

    Weerman, F.

    2014-01-01

    The article discusses the important role played by input in language acquisition. Topics include the difficulty in obtaining the difference between groups and languages, the visibility of the success of children concerning inflection in their knowledge, and the description of lexical mistake for monoli

  6. Selecting training inputs via greedy rank covering

    Energy Technology Data Exchange (ETDEWEB)

    Buchsbaum, A.L.; Santen, J.P.H. van [AT& T Bell Laboratories, Murray Hill, NJ (United States)

    1996-12-31

    We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.
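
    A toy stand-in for such a greedy criterion (not the authors' exact rank-covering
    objective): repeatedly add the candidate input whose inclusion most improves the smallest
    singular value of the growing design matrix, until all parameters of the linear model are
    pinned down.

        import numpy as np

        def greedy_select(X, n_select):
            """Greedily pick rows of X that most increase the smallest singular
            value of the running design matrix (a proxy for estimability)."""
            chosen = []
            for _ in range(n_select):
                best, best_score = None, -np.inf
                for i in range(len(X)):
                    if i in chosen:
                        continue
                    score = np.linalg.svd(X[chosen + [i]], compute_uv=False).min()
                    if score > best_score:
                        best, best_score = i, score
                chosen.append(best)
            return chosen

        rng = np.random.default_rng(5)
        X = rng.normal(size=(200, 6))   # hypothetical candidate feature vectors
        print("selected rows:", greedy_select(X, 6))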

  7. The Contrast Theory of negative input.

    Science.gov (United States)

    Saxton, M

    1997-02-01

    Beliefs about whether or not children receive corrective input for grammatical errors depend crucially on how one defines the concept of correction. Arguably, previous conceptualizations do not provide a viable basis for empirical research (Gold, 1967; Brown & Hanlon, 1970; Hirsh-Pasek, Treiman & Schneiderman, 1984). Within the Contrast Theory of negative input, an alternative definition of negative evidence is offered, based on the idea that the unique discourse structure created in the juxtaposition of child error and adult correct form can reveal to the child the contrast, or conflict, between the two forms, and hence provide a basis for rejecting the erroneous form. A within-subjects experimental design was implemented for 36 children (mean age 5;0), in order to compare the immediate effects of negative evidence with those of positive input, on the acquisition of six novel irregular past tense forms. Children reproduced the correct irregular model more often, and persisted with fewer errors, following negative evidence rather than positive input.

  8. Capital Power:From Input to Output

    Institute of Scientific and Technical Information of China (English)

    You Wanlong; Alice

    2009-01-01

    After thirty years of Chinese overseas investment "going out", we have learned from failures as well as successes. Chinese enterprises are now standing at a new starting point of "going out": China is transforming from a "capital input power" into a "capital output power".

  9. Treatments of Precipitation Inputs to Hydrologic Models

    Science.gov (United States)

    Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization; the most sensitive is the rainfall input time series. These records can come from land-based ...

  10. Input and Intake in Language Acquisition

    Science.gov (United States)

    Gagliardi, Ann C.

    2012-01-01

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…

  13. Programmable Input for Nanomagnetic Logic Devices

    Directory of Open Access Journals (Sweden)

    Schmitt-Landsiedel D.

    2013-01-01

    Full Text Available A programmable magnetic input, based on the magnetic interaction of a soft and a hard magnetic layer, is presented for the first time. To this end, a single-domain Co/Pt nanomagnet is placed on top of one end of a permalloy bar, separated by a thin dielectric layer. The permalloy bar of the introduced input structure is magnetized by weak easy-axis in-plane fields. Acting like a ‘magnetic amplifier’, the generated fringing fields of the permalloy pole are strong enough to control the magnetization of the superimposed Co/Pt nanomagnet, which has high crystalline perpendicular magnetic anisotropy. This magnetostatic interaction results in a shift of the hysteresis curve of the Co/Pt nanomagnet, measured by magneto-optical Kerr microscopy. The Co/Pt nanomagnet is fixed by the fringing field of the permalloy and is thereby not affected by the magnetic power clock of the Nanomagnetic Logic system. MFM measurements verify the functionality of the programmable magnetic input structure. The fringing fields are extracted from micromagnetic simulations and are in good agreement with experimental results. The introduced input structure enables switching the logic functionality of the majority gate from NAND to NOR during runtime, offering programmable Nanomagnetic Logic.

  14. Leaders’ receptivity to subordinates’ creative input: the role of achievement goals and composition of creative input

    NARCIS (Netherlands)

    Sijbom, R.B.L.; Janssen, O.; van Yperen, N.W.

    2015-01-01

    We identified leaders’ achievement goals and composition of creative input as important factors that can clarify when and why leaders are receptive to, and supportive of, subordinates’ creative input. As hypothesized, in two experimental studies, we found that relative to mastery goal leaders, perfo

  18. Adaptive observer for the joint estimation of parameters and input for a coupled wave PDE and infinite dimensional ODE system

    KAUST Repository

    Belkhatir, Zehor

    2016-08-05

    This paper deals with joint parameter and input estimation for a coupled PDE-ODE system. The system consists of a damped wave equation and an infinite dimensional ODE. This model describes the spatiotemporal hemodynamic response in the brain, and the objective is to characterize brain regions using functional Magnetic Resonance Imaging (fMRI) data. To this end, we propose an adaptive estimator and prove the asymptotic convergence of the state, the unknown input and the unknown parameters. The proof is based on a Lyapunov approach combined with a priori identifiability assumptions. The performance of the proposed observer is illustrated through some simulation results.

  17. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provides cloud geometry as well as cloud micro- and macro-properties. Large-scale forcing data to drive the CRM are taken from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently introduced by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed as a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under the maximum (MO) and maximum-random (MRO) overlap assumptions, and remarkably overestimated under the random overlap (RO) assumption, in comparison with results using the CRM's inherent cloud geometry. These biases can reach as high as 100 W m⁻² for SWCF and 60 W m⁻² for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance for both SWCF and LWCF, with biases reduced almost 3-fold compared with the traditional overlap assumptions. The superiority of the GenO assumption is also manifest in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under the traditional overlap assumptions. The study also points out the deficiency of assuming a constant Lcf in GenO. Further examination indicates that the CRM-diagnosed Lcf varies among cloud types and tends to be stratified in the vertical. A new parameterization that takes the vertical variation of Lcf into account reproduces such a relationship well.
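
    The general overlap (GenO) idea is compact enough to state in code. A minimal sketch (layer fractions, spacings and the decorrelation length are made-up values) accumulates total cloud cover layer by layer, blending maximum and random overlap with the weight exp(-dz/Lcf):

    ```python
    import numpy as np

    def total_cloud_cover(cf, dz, L_cf=2.0, scheme="general"):
        """Vertically projected cloud cover from layer cloud fractions `cf`
        (top to bottom) and layer separations `dz` (km) under the maximum,
        random, or general overlap assumption (Hogan & Illingworth 2000)."""
        C = cf[0]
        for k in range(1, len(cf)):
            c = cf[k]
            C_max = max(C, c)                 # maximum overlap
            C_rand = C + c - C * c            # random overlap
            if scheme == "maximum":
                C = C_max
            elif scheme == "random":
                C = C_rand
            else:                             # general: blend by decorrelation
                alpha = np.exp(-dz[k - 1] / L_cf)
                C = alpha * C_max + (1.0 - alpha) * C_rand
        return C

    cf = [0.3, 0.4, 0.2]                      # layer cloud fractions
    dz = [1.0, 1.5]                           # layer separations (km)
    for s in ("maximum", "random", "general"):
        print(s, round(total_cloud_cover(cf, dz, scheme=s), 3))
    ```

    Strictly, the pairwise accumulation is itself an approximation; it is enough here to show that GenO always falls between the MO and RO extremes.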

  18. Contemporary assumptions on human nature and work and approach to human potential managing

    Directory of Open Access Journals (Sweden)

    Vujić Dobrila

    2006-01-01

    The general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and the preference for a general organizational model, as well as for particular mechanisms of human resource management. The research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and a group of small business enterprises differing in ownership structure and type of activity. The general hypothesis, that assumptions on human nature and work are statistically significantly connected to the preferred approach (model) to work motivation and commitment, has been confirmed. Specific hypotheses have also been confirmed: assumptions on the human as a rational economic being are statistically significantly correlated with only two mechanisms of the traditional model, the mechanism of work-method control and the working-discipline mechanism; statistically significant assumptions on the human as a social being are correlated with all mechanisms of engaging employees belonging to the human relations model, except the mechanism of introducing an adequate type of reward for all employees independently of working results; and assumptions on the human as a creative being are statistically significantly and positively correlated with the preference for two mechanisms belonging to the human resource model, investing in education and training and creating conditions for the application of knowledge and skills. Young subjects holding assumptions on the human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resources model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and preferred models of engagement appears especially in the sub-sample of managers and in the category of young subjects.

  1. Implications of Krashen’s Input Hypothesis and Affective Filter Hypothesis in College English Teaching

    Institute of Scientific and Technical Information of China (English)

    张妮

    2014-01-01

    The American linguist Krashen proposed the input hypothesis and the affective filter hypothesis, which have deeply influenced the field of linguistics. These two assumptions play an important role in improving learners' language ability in the process of second language acquisition. By applying the two hypotheses to college English teaching, teachers aim to establish a new mode of language teaching and improve the efficiency of college English teaching.

  2. Rank correlation plots for use with correlated input variables in simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Iman, R.L.; Davenport, J.M.

    1980-11-01

    A method for inducing a desired rank correlation matrix on multivariate input vectors for simulation studies has recently been developed by Iman and Conover (SAND 80-0157). The primary intention of this procedure is to produce correlated input variables for use with computer models. Since this procedure is distribution free and allows the exact marginal distributions to remain intact, it can be used with any marginal distributions for which it is reasonable to think in terms of correlation. A series of rank correlation plots based on this procedure when the marginal distributions are normal, lognormal, uniform, and loguniform is presented. These plots provide a convenient tool for both aiding the modeler in determining the degree of dependence among variables (rather than guessing) and communicating with the modeler the effect of different correlation assumptions. 12 figures, 10 tables.
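
    The essence of the procedure can be sketched as follows (a simplified version assuming numpy/scipy, omitting the usual correction for the residual sample correlation of the score matrix): build van der Waerden scores with the target correlation via a Cholesky factor, then reorder each input column to share those ranks, leaving every marginal distribution intact.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def iman_conover(X, target_corr, seed=0):
        """Rearrange the columns of X (independent samples, arbitrary
        marginals) so their rank correlation approximates target_corr."""
        rng = np.random.default_rng(seed)
        n, k = X.shape
        scores = norm.ppf(np.arange(1, n + 1) / (n + 1.0))   # vdW scores
        M = np.array([rng.permutation(scores) for _ in range(k)]).T
        T = M @ np.linalg.cholesky(target_corr).T    # scores w/ target corr
        Y = np.empty_like(X)
        for j in range(k):
            ranks = np.argsort(np.argsort(T[:, j]))  # 0-based ranks of T
            Y[:, j] = np.sort(X[:, j])[ranks]        # same ranks, same marginal
        return Y

    rng = np.random.default_rng(1)
    X = np.column_stack([rng.lognormal(size=2000), rng.uniform(size=2000)])
    target = np.array([[1.0, 0.7], [0.7, 1.0]])
    Y = iman_conover(X, target)
    print(np.corrcoef(rankdata(Y[:, 0]), rankdata(Y[:, 1]))[0, 1])  # near 0.7
    ```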

  3. Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints.

    Science.gov (United States)

    Chen, Ziting; Li, Zhijun; Chen, C L Philip

    2016-03-17

    An adaptive neural control strategy for multiple-input multiple-output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, a radial basis function neural network, a barrier Lyapunov function (BLF), and a disturbance observer. By ensuring the boundedness of the BLF of the closed-loop system, it is demonstrated that output tracking is achieved with all states remaining in the constraint sets, and the usual assumption on the nonsingularity of the unknown control coefficient matrices is eliminated. It is rigorously proved that the constructed adaptive neural control guarantees the semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, simulation studies on a 2-DOF robotic manipulator indicate that the designed adaptive control is effective.

  4. Robust state estimation for uncertain linear systems with deterministic input signals

    Institute of Scientific and Technical Information of China (English)

    Huabo LIU; Tong ZHOU

    2014-01-01

    In this paper, we investigate state estimation for a dynamical system in which not only process and measurement noise but also parameter uncertainties and deterministic input signals are involved. The sensitivity-penalization-based robust state estimation is extended to uncertain linear systems with deterministic input signals and parametric uncertainties that may nonlinearly affect a state-space plant model. The form of the derived robust estimator is similar to that of the well-known Kalman filter, with a comparable computational complexity. Under a few weak assumptions, it is proved that although the derived state estimator is biased, the bound of the estimation errors is finite and the covariance matrix of the estimation errors is bounded. Numerical simulations show that the obtained robust filter has relatively good estimation performance.
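
    For orientation, the nominal structure being extended is the standard Kalman filter driven by a known deterministic input; the sketch below shows only that baseline recursion (the matrices are illustrative, and the sensitivity-penalized robust gain computation of the paper is not reproduced).

    ```python
    import numpy as np

    # Kalman filter for x_{k+1} = A x_k + B u_k + w_k,  y_k = C x_k + v_k,
    # with a known deterministic input u_k (illustrative system matrices).
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    C = np.array([[1.0, 0.0]])
    Q = 1e-4 * np.eye(2)          # process-noise covariance
    R = np.array([[1e-2]])        # measurement-noise covariance

    def kf_step(x, P, u, y):
        x_pred = A @ x + B @ u                       # predict with input
        P_pred = A @ P @ A.T + Q
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        x_new = x_pred + K @ (y - C @ x_pred)        # measurement update
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new

    rng = np.random.default_rng(0)
    x_true, x_est, P = np.array([0.0, 1.0]), np.zeros(2), np.eye(2)
    for k in range(200):
        u = np.array([np.sin(0.05 * k)])
        x_true = A @ x_true + B @ u + rng.multivariate_normal(np.zeros(2), Q)
        y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
        x_est, P = kf_step(x_est, P, u, y)
    print("final estimation error:", x_true - x_est)
    ```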

  5. World assumptions, posttraumatic stress and quality of life after a natural disaster: A longitudinal study

    Directory of Open Access Journals (Sweden)

    Nygaard Egil

    2012-06-01

    Background: Changes in world assumptions are a fundamental concept within theories that explain posttraumatic stress disorder. The objective of the present study was to gain a greater understanding of how changes in world assumptions are related to quality of life and posttraumatic stress symptoms after a natural disaster. Methods: A longitudinal study of 574 Norwegian adults who survived the Southeast Asian tsunami in 2004 was undertaken. Multilevel analyses were used to identify which factors at six months post-tsunami predicted quality of life and posttraumatic stress symptoms two years post-tsunami. Results: Good quality of life and posttraumatic stress symptoms were negatively related. However, major differences in the predictors of these outcomes were found. Females reported significantly higher quality of life and more posttraumatic stress than men. The association between level of exposure to the tsunami and quality of life seemed to be mediated by posttraumatic stress. Negative perceived changes in the assumption “the world is just” were related to adverse outcomes in both quality of life and posttraumatic stress. Positive perceived changes in the assumptions “life is meaningful” and “feeling that I am a valuable human” were associated with higher levels of quality of life but not with posttraumatic stress. Conclusions: Quality of life and posttraumatic stress symptoms demonstrate differences in their etiology. World assumptions may be less specifically related to posttraumatic stress than has been postulated in some cognitive theories.

  6. Are nest sites actively chosen? Testing a common assumption for three non-resource limited birds

    Science.gov (United States)

    Goodenough, A. E.; Elliot, S. L.; Hart, A. G.

    2009-09-01

    Many widely accepted ecological concepts are simplified assumptions about complex situations that remain largely untested. One example is the assumption that nest-building species choose nest sites actively when they are not resource limited. This assumption has seen little direct empirical testing: most studies on nest-site selection simply assume that sites are chosen actively (and seek explanations for such behaviour) without considering that sites may be selected randomly. We used 15 years of data from a nestbox scheme in the UK to test the assumption of active nest-site choice in three cavity-nesting bird species that differ in breeding and migratory strategy: blue tit (Cyanistes caeruleus), great tit (Parus major) and pied flycatcher (Ficedula hypoleuca). Nest-site selection was non-random (implying active nest-site choice) for blue and great tits, but not for pied flycatchers. We also considered the relative importance of year-specific and site-specific factors in determining occupation of nest sites. Site-specific factors were more important than year-specific factors for the tit species, while the reverse was true for pied flycatchers. Our results show that nest-site selection, in birds at least, is not always the result of active choice, such that choice should not be assumed automatically in studies of nesting behaviour. We use this example to highlight the need to test key ecological assumptions empirically, and the importance of doing so across taxa rather than for single "model" species.

  7. Assumptions and moral understanding of the wish to hasten death: a philosophical review of qualitative studies.

    Science.gov (United States)

    Rodríguez-Prat, Andrea; van Leeuwen, Evert

    2017-07-01

    It is not uncommon for patients with advanced disease to express a wish to hasten death (WTHD). Qualitative studies of the WTHD have found that such a wish may have different meanings, none of which can be understood outside the patient's personal and sociocultural background, and none of which necessarily implies taking concrete steps towards ending one's life. The starting point for the present study was a previous systematic review of qualitative studies of the WTHD in advanced patients. Here we analyse in greater detail the statements made by patients included in that review in order to examine their moral understandings and representations of illness, the dying process and death. We identify and discuss four classes of assumptions: (1) assumptions related to patients' moral understandings in terms of dignity, autonomy and authenticity; (2) assumptions related to social interactions; (3) assumptions related to the value of life; and (4) assumptions related to medicalisation as an overarching context within which the WTHD is expressed. Our analysis shows how a philosophical perspective can add to an understanding of the WTHD by taking into account cultural and anthropological aspects of the phenomenon. We conclude that the knowledge gained through exploring patients' experience and moral understandings in the end-of-life context may serve as the basis for care plans and interventions that can help them experience their final days as a meaningful period of life, restoring some sense of personal dignity in those patients who feel this has been lost.

  8. Multimodal interfaces with voice and gesture input

    Energy Technology Data Exchange (ETDEWEB)

    Milota, A.D.; Blattner, M.M.

    1995-07-20

    The modalities of speech and gesture have different strengths and weaknesses, but combined they create a synergy in which each modality corrects the weaknesses of the other. We believe that a multimodal system intertwining speech and gesture must start from a different foundation than systems based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to users. This study describes the taxonomy and gives examples of applications to pen input systems.

  9. Controlling Synfire Chain by Inhibitory Synaptic Input

    Science.gov (United States)

    Shinozaki, Takashi; Câteau, Hideyuki; Urakubo, Hidetoshi; Okada, Masato

    2007-04-01

    The propagation of highly synchronous firings across neuronal networks, called the synfire chain, has been actively studied both theoretically and experimentally. The temporal accuracy and remarkable stability of the propagation have been repeatedly examined in previous studies. However, for such a mode of signal transduction to play a major role in processing information in the brain, the propagation should also be controlled dynamically and flexibly. Here, we show that inhibitory but not excitatory input can bidirectionally modulate the propagation, i.e., enhance or suppress the synchronous firings depending on the timing of the input. Our simulations based on the Hodgkin-Huxley neuron model demonstrate this bidirectional modulation and suggest that it should be achieved with any biologically inspired modeling. Our finding may help describe a concrete scenario of how multiple synfire chains lying in a neuronal network are appropriately controlled to perform significant information processing.

  10. Virtual input device with diffractive optical element

    Science.gov (United States)

    Wu, Ching Chin; Chu, Chang Sheng

    2005-02-01

    For a portable device, such as a PDA or cell phone, a small built-in virtual input device is more convenient for complex input demands. A few years ago, a creative idea called the 'virtual keyboard' was announced, but up to now there is still no mass-production method for this idea. In this paper we show the whole procedure of making a virtual keyboard. The first step is the HOE (Holographic Optical Element) design of the keyboard image, which yields a fan angle of about 30 degrees; the electroforming method is then used to copy this pattern with high precision, and finally the element can be produced by injection molding. With an adaptive lens design we can obtain a keyboard image that is well corrected for distortion and has a wider fan angle of about 70 degrees. With better alignment in the HOE pattern lithography, higher diffraction efficiency can be achieved.

  11. Neuroprosthetics and the science of patient input.

    Science.gov (United States)

    Benz, Heather L; Civillico, Eugene F

    2017-01-01

    Safe and effective neuroprosthetic systems are of great interest to both DARPA and CDRH, due to their innovative nature and their potential to aid severely disabled populations. By expanding what is possible in human-device interaction, these devices introduce new potential benefits and risks. Therefore patient input, which is increasingly important in weighing benefits and risks, is particularly relevant for this class of devices. FDA has been a significant contributor to an ongoing stakeholder conversation about the inclusion of the patient voice, working collaboratively to create a new framework for a patient-centered approach to medical device development. This framework is evolving through open dialogue with researcher and patient communities, investment in the science of patient input, and policymaking that is responsive to patient-centered data throughout the total product life cycle. In this commentary, we will discuss recent developments in patient-centered benefit-risk assessment and their relevance to the development of neural prosthetic systems.

  12. Model based optimization of EMC input filters

    Energy Technology Data Exchange (ETDEWEB)

    Raggl, K; Kolar, J. W. [Swiss Federal Institute of Technology, Power Electronic Systems Laboratory, Zuerich (Switzerland); Nussbaumer, T. [Levitronix GmbH, Zuerich (Switzerland)

    2008-07-01

    Input filters of power converters for compliance with regulatory electromagnetic compatibility (EMC) standards are often over-dimensioned in practice, due to a non-optimal selection of the number of filter stages and/or the lack of solid volumetric models of the inductor cores. This paper presents a systematic filter design approach based on a specific filter attenuation requirement and volumetric component parameters. It is shown that a minimal volume can be found at a certain optimal number of filter stages for both the differential-mode (DM) and common-mode (CM) filter. The considerations are carried out exemplarily for an EMC input filter of a single-phase power converter at power levels of 100 W, 300 W, and 500 W. (author)
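
    The stage-count trade-off behind that result can be caricatured in a few lines (the 40 dB/decade roll-off per LC stage is standard; the volume proxy and the fixed per-stage overhead are assumptions for illustration, not the paper's validated component models):

    ```python
    import numpy as np

    att_req = 80.0          # required attenuation at f_d [dB] (assumed)
    f_d = 150e3             # first regulated EMC frequency [Hz]
    overhead = 5e-10        # fixed per-stage volume share (assumed)

    def volume_proxy(n):
        # n LC stages roll off at ~40*n dB/decade, fixing the corner
        # frequency needed to reach att_req at f_d; the L*C product per
        # stage then serves as a crude stored-energy (volume) proxy.
        f_c = f_d * 10.0 ** (-att_req / (40.0 * n))
        lc = 1.0 / (2.0 * np.pi * f_c) ** 2
        return n * (lc + overhead)

    vols = {n: volume_proxy(n) for n in range(1, 6)}
    for n, v in vols.items():
        print(f"{n} stage(s): relative volume {v / vols[1]:.4f}")
    print("optimal stage count:", min(vols, key=vols.get))
    ```

    More stages permit a higher corner frequency and hence smaller reactive components, but each stage adds fixed overhead, so an interior minimum appears.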

  13. Solar wind-magnetosphere energy input functions

    Energy Technology Data Exchange (ETDEWEB)

    Bargatze, L.F.; McPherron, R.L.; Baker, D.N.

    1985-01-01

    A new formula for the solar wind-magnetosphere energy input parameter, P_i, is sought by applying the constraints imposed by dimensional analysis. Applying these constraints yields a general equation of the form P_i = ρV³ · l_CF² · F(M_A, θ), where ρV³ is the solar wind kinetic energy density and l_CF² is the scale size of the magnetosphere's effective energy "collection" region. The function F, which depends on M_A, the Alfven Mach number, and on θ, the interplanetary magnetic field clock angle, is included in the general equation for P_i in order to model the magnetohydrodynamic processes which are responsible for solar wind-magnetosphere energy transfer. By assuming the form of the function F, it is possible to further constrain the formula for P_i. This is accomplished by using solar wind data, geomagnetic activity indices, and simple statistical methods. It is found that P_i is proportional to (ρV²)^(1/6) · V·B·G(θ), where ρV² is the solar wind dynamic pressure and V·B·G(θ) is a rectified version of the solar wind motional electric field. Furthermore, it is found that G(θ), the gating function which modulates the energy input to the magnetosphere, is well represented by a "leaky" rectifier function such as sin⁴(θ/2). This function allows for enhanced energy input when the interplanetary magnetic field is oriented southward, and for some energy input when the interplanetary magnetic field is oriented northward. 9 refs., 4 figs.
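
    The fitted coupling function is straightforward to evaluate in relative units (the overall proportionality constant is omitted, and the sample solar wind values are assumptions):

    ```python
    import numpy as np

    # P_i ~ (rho*V^2)^(1/6) * V * B * G(theta), with the "leaky rectifier"
    # gate G(theta) = sin^4(theta/2); SI inputs, relative output.
    def energy_input(rho, V, B, theta):
        return (rho * V**2) ** (1.0 / 6.0) * V * B * np.sin(theta / 2.0) ** 4

    rho = 5e6 * 1.67e-27           # 5 protons/cm^3 expressed in kg/m^3
    V, B = 450e3, 5e-9             # wind speed [m/s], IMF magnitude [T]
    for deg in (0, 45, 90, 180):   # northward ... southward IMF clock angle
        print(f"theta = {deg:3d} deg: P_i ~ {energy_input(rho, V, B, np.radians(deg)):.3e}")
    ```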

  14. Sensory Synergy as Environmental Input Integration

    Directory of Open Access Journals (Sweden)

    Fady eAlnajjar

    2015-01-01

    The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergies and muscle synergies communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with 9 healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing a prosthesis' sensory system to make the controller simpler.

  15. EMOWARS: INTERACTIVE GAME INPUT USING FACIAL EXPRESSIONS

    Directory of Open Access Journals (Sweden)

    Andry Chowanda

    2013-11-01

    … opportunity for researchers in affective games, with more interactive game play as well as rich and complex stories. Hopefully this will improve the user's affective state and emotions in the game. The results of this research imply that the happy emotion obtains 78% detection, while the anger emotion has the lowest detection at 44.4%. Moreover, users prefer the mouse and FER (facial expression recognition) as the best inputs for this game.

  16. Cometary micrometeorites and input of prebiotic compounds

    OpenAIRE

    2014-01-01

    The appearance of life on the early Earth was probably favored by inputs of extraterrestrial matter brought by carbonaceous chondrite-like objects or cometary material. Interplanetary dust collected on Earth today is related to carbonaceous chondrites and to cometary material. Such dust contains, in particular, at least a few percent of organic matter and organic compounds (amino acids, PAHs, …) as well as hydrous silicates, and could have largely contributed to the budget of prebiotic matter on Earth, about 4 ...

  17. Emowars: Interactive Game Input Using Facial Expressions

    Directory of Open Access Journals (Sweden)

    Andry Chowanda

    2013-12-01

    Research in affective games has received attention from the research community over the past five years. As a crucial aspect of a game, emotions play an important role in the user experience as well as in emphasizing the user's emotional state in game design; this improves the user's interactivity while playing the game. This research aims to discuss and analyze whether emotions can replace traditional user game inputs (keyboard, mouse, and others). The methodology used in this research is divided into two main phases: game design and facial expression recognition. The results indicate that users preferred to use a traditional input such as the mouse, and that users' interactivity with the game is still rather low. However, this is a great opportunity for researchers in affective games, with more interactive game play as well as rich and complex stories, which will hopefully improve the user's affective state and emotions in the game. The results also imply that the happy emotion obtains 78% detection, while the anger emotion has the lowest detection at 44.4%. Moreover, users prefer the mouse and FER (facial expression recognition) as the best inputs for this game.

  18. Molecular structure input on the web

    Directory of Open Access Journals (Sweden)

    Ertl Peter

    2010-02-01

    A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecular structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus the web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example, the popular JME Molecule Editor, is described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  19. COMPETITION VERSUS COLLUSION: THE PARALLEL BEHAVIOUR IN THE ABSENCE OF THE SYMMETRY ASSUMPTION

    Directory of Open Access Journals (Sweden)

    Romano Oana Maria

    2012-07-01

    Cartel detection is usually viewed as a key task of competition authorities. A special case of cartel behaviour is parallel behaviour in selling prices. This type of behaviour is difficult to assess, and its analysis does not always yield conclusive results. To evaluate such behaviour, the available data are compared with theoretical values obtained using a competitive or a collusive model. When different competitive or collusive models are considered, economists use the assumption of symmetric costs and quantities produced/sold for simplicity of calculation. This assumption has the disadvantage that the theoretical values obtained may deviate significantly from the actual values on the market, which can sometimes lead to ambiguous results. The present paper analyses the parallel behaviour of economic agents in the absence of the symmetry assumption and studies the identification of the model under these conditions.

  20. Assumptions and Axioms: Mathematical Structures to Describe the Physics of Rigid Bodies

    CERN Document Server

    Butler, Philip H; Renaud, Peter F

    2010-01-01

    This paper challenges some of the common assumptions underlying the mathematics used to describe the physical world. We start by reviewing many of the assumptions underlying the concepts of real, physical, rigid bodies and the translational and rotational properties of such rigid bodies. Nearly all elementary and advanced texts make physical assumptions that are subtly different from ours, and as a result we develop a mathematical description that is subtly different from the standard mathematical structure. Using the homogeneity and isotropy of space, we investigate the translational and rotational features of rigid bodies in two and three dimensions. We find that the concept of rigid bodies and the concept of the homogeneity of space are intrinsically linked. The geometric study of rotations of rigid objects leads to a geometric product relationship for lines and vectors. By requiring this product to be both associative and to satisfy Pythagoras' theorem, we obtain a choice of Clifford algebras. We extend o...

  1. Unraveling the photovoltaic technology learning curve by incorporation of input price changes and scale effects

    Energy Technology Data Exchange (ETDEWEB)

    Yu, C.F.; van Sark, W.G.J.H.M.; Alsema, E.A. [Department of Science, Technology and Society, Copernicus Institute for Sustainable Development and Innovation, Utrecht University, Heidelberglaan 2, 3584 CS Utrecht (Netherlands)

    2011-01-15

    In a large number of energy models, the use of learning curves for estimating technological improvements has become popular. This is based on the assumption that technological development can be monitored by following cost development as a function of market size. However, recent data show that in some stages of photovoltaic (PV) technology production, the market price of PV modules stabilizes even though the cumulative capacity increases. This implies that no technological improvement takes place in these periods: the cost predicted by the learning curve is lower than the market price. We propose that this bias results from ignoring the effects of input prices and scale, and that incorporating input prices and scale effects into learning curve theory is an important step in making cost predictions more reliable. In this paper, a methodology is described to incorporate the scale and input-price effects as additional variables in the one-factor learning curve, which leads to the definition of the multi-factor learning curve. This multi-factor learning curve is not only derived from economic theory, but is also supported by an empirical study. The results clearly show that input prices and scale effects are to be included, and that, although market prices are stabilizing, learning is still taking place. (author)
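
    The difference between the one-factor and multi-factor specifications is easy to demonstrate with ordinary least squares in log space (the data below are synthetic, and the chosen extra covariates, a silicon price index and plant scale, are assumptions in the spirit of the paper):

    ```python
    import numpy as np

    # ln C = a + b ln Q                     (one-factor learning curve)
    # ln C = a + b ln Q + c ln p + d ln S   (multi-factor learning curve)
    rng = np.random.default_rng(0)
    n = 40
    lnQ = np.linspace(0.0, 6.0, n)                     # log cumulative capacity
    lnP = 0.3 * np.sin(lnQ) + rng.normal(0, 0.05, n)   # log input-price index
    lnS = 0.5 * lnQ + rng.normal(0, 0.1, n)            # log plant scale
    lnC = 1.0 - 0.32 * lnQ + 0.6 * lnP - 0.2 * lnS + rng.normal(0, 0.02, n)

    def ols(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    b_one   = ols(np.column_stack([np.ones(n), lnQ]), lnC)[1]
    b_multi = ols(np.column_stack([np.ones(n), lnQ, lnP, lnS]), lnC)[1]
    print("one-factor learning exponent  :", b_one)    # biased by omissions
    print("multi-factor learning exponent:", b_multi)  # near the true -0.32
    print("learning rate (multi): %.1f%%" % (100 * (1 - 2.0 ** b_multi)))
    ```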

  2. Sensumotor transformation of input devices and the impact on practice and task difficulty.

    Science.gov (United States)

    Sutter, C

    2007-12-01

    In the present study, the usability of two laptop input devices, the touchpad and the trackpoint, is evaluated. The focus is on the impact of the sensumotor transformation of input devices on practice and task difficulty. Thirty novices and 14 experts operated either a touchpad or a trackpoint over a period of 1600 trials of a point-click task. As hypothesized, novices and experts operated the touchpad about 15% faster than the trackpoint. For novices, performance rose distinctly and levelled off after 960 trials. This consolidation occurred earlier than reported in the literature (1400-1600 trials) and, contrary to the assumption, learning was similar for touchpad and trackpoint. The impact of task difficulty dropped remarkably with practice, which points to general rather than task-specific learning. In conclusion, ergonomic guidelines can be derived for the user-specific optimization of touchpad and trackpoint usage. Actual and potential applications of this research include the user-specific optimization of laptop input devices. Within the theoretical framework of psychomotor models, profound knowledge of user behaviour in human-computer interaction is provided. Ergonomic guidelines can be derived for the efficient usage of laptop input devices and an optimized hardware and software design.

  3. Characteristic operator functions for quantum input-plant-output models and coherent control

    Science.gov (United States)

    Gough, John E.

    2015-01-01

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of the definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction in which the fast degrees of freedom are decoupled from the slow ones and eliminated.

  4. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  5. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    Science.gov (United States)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is a need for concurrent openness and transparency in the communication of scientific investigation. Even with open communication, it is essential that the scientific community continue to provide impartial, result-driven information. An unknown factor in climate literacy is the influence of an ostensibly impartial presentation of scientific investigation that has relied on biased base assumptions. A formal publication appendix, with additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by the assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing a headline or key phrasing within a written work. This presentation focuses on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication; we primarily investigate recent publications in the GCS literature that produce scenario outcomes using apparently biased pro or con assumptions. A general review of scenario economics and capture process efficacy, and a specific examination of sequestration site assumptions and processes, reveal an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publicly available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of

  6. cBrother: relaxing parental tree assumptions for Bayesian recombination detection.

    Science.gov (United States)

    Fang, Fang; Ding, Jing; Minin, Vladimir N; Suchard, Marc A; Dorman, Karin S

    2007-02-15

    Bayesian multiple change-point models accurately detect recombination in molecular sequence data. Previous Java-based implementations assume a fixed topology for the representative parental data. cBrother is a novel C language implementation that capitalizes on reduced computational time to relax the fixed tree assumption. We show that cBrother is 19 times faster than its predecessor and the fixed tree assumption can influence estimates of recombination in a medically-relevant dataset. cBrother can be freely downloaded from http://www.biomath.org/dormanks/ and can be compiled on Linux, Macintosh and Windows operating systems. Online documentation and a tutorial are also available at the site.

  7. Automatic ethics: the effects of implicit assumptions and contextual cues on moral behavior.

    Science.gov (United States)

    Reynolds, Scott J; Leavitt, Keith; DeCelles, Katherine A

    2010-07-01

    We empirically examine the reflexive or automatic aspects of moral decision making. To begin, we develop and validate a measure of an individual's implicit assumption regarding the inherent morality of business. Then, using an in-basket exercise, we demonstrate that an implicit assumption that business is inherently moral impacts day-to-day business decisions and interacts with contextual cues to shape moral behavior. Ultimately, we offer evidence supporting a characterization of employees as reflexive interactionists: moral agents whose automatic decision-making processes interact with the environment to shape their moral behavior.

  8. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Science.gov (United States)

    Fang, L.; Sun, X. Y.; Liu, Y. W.

    2016-12-01

    In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, and then show by a generalized derivation that if a model involves multiple stationary restrictions, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on general guidelines for the SGS modelling methodology.

  9. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  10. Validity of the Michaelis-Menten equation--steady-state or reactant stationary assumption: that is the question.

    Science.gov (United States)

    Schnell, Santiago

    2014-01-01

    The Michaelis-Menten equation is generally used to estimate the kinetic parameters, V and K_M, when the steady-state assumption is valid. Following a brief overview of the derivation of the Michaelis-Menten equation for the single-enzyme, single-substrate reaction, a critical review of the criteria for validity of the steady-state assumption is presented. The application of the steady-state assumption makes the implicit assumption that there is an initial transient during which the substrate concentration remains approximately constant, equal to the initial substrate concentration, while the enzyme-substrate complex concentration builds up. This implicit assumption is known as the reactant stationary assumption. This review presents evidence showing that the reactant stationary assumption is distinct from and independent of the steady-state assumption. Contrary to the widely believed notion that the Michaelis-Menten equation can always be applied under the steady-state assumption, the reactant stationary assumption is truly the necessary condition for validity of the Michaelis-Menten equation to estimate kinetic parameters. Therefore, the application of the Michaelis-Menten equation only leads to accurate estimation of kinetic parameters when it is used under experimental conditions meeting the reactant stationary assumption. The criterion for validity of the reactant stationary assumption does not require the restrictive condition of choosing a substrate concentration that is much higher than the enzyme concentration in initial rate experiments. © 2013 FEBS.
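
    The distinction is easy to probe numerically: integrate the full mass-action system alongside the Michaelis-Menten rate law and check the reactant stationary criterion e0/(K_M + s0) << 1 (the rate constants below are illustrative):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # E + S <-> C -> E + P (full kinetics) versus the MM rate law.
    k1, km1, k2 = 1.0, 1.0, 0.5
    e0, s0 = 0.01, 1.0
    KM = (km1 + k2) / k1
    print("reactant stationary ratio e0/(KM + s0):", e0 / (KM + s0))  # << 1

    def full(t, y):
        s, c = y                          # substrate and complex
        return [-k1 * (e0 - c) * s + km1 * c,
                 k1 * (e0 - c) * s - (km1 + k2) * c]

    def mm(t, y):
        return [-k2 * e0 * y[0] / (KM + y[0])]

    sol_f = solve_ivp(full, (0, 2000), [s0, 0.0], rtol=1e-8, dense_output=True)
    sol_m = solve_ivp(mm,   (0, 2000), [s0],      rtol=1e-8, dense_output=True)
    for t in (10.0, 100.0, 1000.0):
        print(f"t={t:6.0f}  s_full={sol_f.sol(t)[0]:.5f}  s_MM={sol_m.sol(t)[0]:.5f}")
    ```

    With e0/(K_M + s0) small the two solutions agree closely; raising e0 toward s0 breaks the agreement even though a quasi-steady phase still exists.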

  11. Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

    Science.gov (United States)

    Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.

    2017-01-01

    Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
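
    A miniature version of such a simulation takes only a few lines (group sizes, the skewed distribution and the replication count are illustrative choices, and only three of the fourteen tests are shown):

    ```python
    import numpy as np
    from scipy import stats

    # Empirical Type I error when the k groups share a common variance
    # but are non-normal (exponential, i.e. strongly skewed).
    rng = np.random.default_rng(0)
    reps, n, k, alpha = 2000, 30, 3, 0.05
    rej = {"bartlett": 0, "levene": 0, "fligner": 0}
    for _ in range(reps):
        groups = [rng.exponential(scale=1.0, size=n) for _ in range(k)]
        rej["bartlett"] += stats.bartlett(*groups).pvalue < alpha
        rej["levene"]   += stats.levene(*groups, center="median").pvalue < alpha
        rej["fligner"]  += stats.fligner(*groups).pvalue < alpha
    for name, cnt in rej.items():
        print(f"{name:9s} empirical Type I error: {cnt / reps:.3f}")
    ```

    Under these settings Bartlett's test typically rejects far above the nominal 5%, while the median-centered Levene (Brown-Forsythe) and Fligner-Killeen tests stay close to it, which is the kind of robustness contrast the study quantifies.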

  12. The Importance of Input and Interaction in SLA

    Institute of Scientific and Technical Information of China (English)

    党春花

    2009-01-01

    As is well known, input and interaction play crucial roles in second language acquisition (SLA). Different linguistic schools explain input and interaction differently. Behaviorist theories hold the view that input is composed of stimuli and responses, putting more emphasis on the importance of input, while mentalist theories find input a necessary but not sufficient condition for SLA. At present, social interaction theories, a branch of cognitive linguistics, suggest that besides input, interaction is also essential to language acquisition. This essay discusses how input and interaction result in SLA.

  13. Nonlinear control for systems containing input uncertainty via a Lyapunov-based approach

    Science.gov (United States)

    Mackunis, William

    Controllers are often designed based on the assumption that a control actuation can be applied directly to the system. This assumption may not be valid, however, for systems containing parametric input uncertainty or unmodeled actuator dynamics. In this dissertation, a tracking control methodology is proposed for aircraft and aerospace systems whose dynamic models contain uncertainty in the control actuation. The dissertation focuses on five problems of interest: (1) adaptive CMG-actuated satellite attitude control in the presence of inertia uncertainty and uncertain CMG gimbal friction; (2) adaptive neural network (NN)-based satellite attitude control for CMG-actuated small-sats in the presence of uncertain satellite inertia, nonlinear disturbance torques, uncertain CMG gimbal friction, and nonlinear electromechanical CMG actuator disturbances; (3) dynamic inversion (DI) control for aircraft systems containing parametric input uncertainty and additive, nonlinearly parameterizable (non-LP) disturbances; (4) adaptive dynamic inversion (ADI) control for aircraft systems as described in (3); and (5) adaptive output feedback control for aircraft systems as described in (3) and (4).

  14. Cone inputs to murine striate cortex

    Directory of Open Access Journals (Sweden)

    Gouras Peter

    2008-11-01

    Background: We have recorded responses from single neurons in murine visual cortex to determine the effectiveness of the input from the two murine cone photoreceptor mechanisms, and whether there is any unique selectivity for cone inputs at this higher region of the visual system that would support the possibility of colour vision in mice. Each eye was stimulated by diffuse light at either 370 nm (a strong stimulus for the ultraviolet (UV) cone opsin) or 505 nm (exclusively stimulating the middle-wavelength-sensitive (M) cone opsin), obtained from light-emitting diodes (LEDs), in the presence of a strong adapting light that suppressed the responses of rods. Results: Single cells responded to these diffuse stimuli in all areas of striate cortex. Two types of responsive cells were encountered. One type (135/323, 42%) had little to no spontaneous activity and responded at the on and/or the off phase of the light stimulus with a few impulses, often of relatively large amplitude. A second type (166/323, 51%) had spontaneous activity and responded tonically to light stimuli with impulses often of small amplitude. Most of the cells responded similarly to both spectral stimuli. A few (18/323, 6%) responded strongly or exclusively to one or the other spectral stimulus, and rarely in a spectrally opponent manner. Conclusion: Most cells in murine striate cortex receive excitatory inputs from both UV- and M-cones. A small fraction shows strong selectivity for one or the other cone mechanism, and occasionally cone-opponent responses. Cells that could underlie chromatic contrast detection are present but extremely rare in murine striate cortex.

  15. Comparison between Input Hypothesis and Interaction Hypothesis

    Institute of Scientific and Technical Information of China (English)

    李佳

    2012-01-01

    Krashen’s Input Hypothesis and Long’s Interaction Hypothesis are both valuable research results in the field of language acquisition and play a significant role in language teaching and learning. Comparing them, their similarities lie in a shared goal and basis, the same focus on comprehension, and the same challenge to traditional teaching concepts, while the differences lie in the ways exposure is made comprehensible and the roles that learners play. The comparison is worthwhile because the results can provide valuable guidance for language teachers and learners to teach or acquire a new language more efficiently.

  16. LISTENING COMPREHENSION: MORE THAN JUST COMPREHENSIBLE INPUT

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    In the ten years since the publication of Krashen’s theory on second language acquisition (SLA), the role of comprehensible input (CI) in the learning/acquiring of a language has received considerable attention (Krashen, 1982, 1985; Ellis, 1991, 1992; Long, 1983, 1985). As a result of these studies researchers now agree on the following points. Exposure to a language does not lead to acquisition; the personal accounts of so many language learners who have spent many years in a country or who have listened to endless hours of radio and television without being able to understand or speak the language attest to this fact.

  17. Operational modal analysis with non-stationary inputs

    OpenAIRE

    Gouache, Thibault; Morlier, Joseph; Michon, Guilhem; Coulange, Baptiste

    2013-01-01

    Operational modal analysis (OMA) techniques enable in-situ, uncontrolled vibrations to be used for the modal analysis of structures. In reality, operational vibrations are a combination of numerous excitation sources that are much more complex than random white noise or a harmonic. Numerous OMA techniques exist, such as SSI, NExT, FDD and BSS. All these methods are based on the fundamental hypothesis that the input or force applied to the structure to be analyzed is a stationary white noise.

  18. TSM control of the delayed input system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper proposes a terminal sliding mode (TSM) control method for delayed-input systems with uncertainties. First, through a state transformation, the original system is transformed into a non-delayed controllable canonical form. A terminal sliding mode and a terminal sliding control law are then designed for the transformed system using the Lyapunov method. With this method, the reaching time from any initial state and the convergence time to the equilibrium point are bounded in finite time. Simulation results validate the method.
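
    The core finite-time mechanism, stripped of the delay-removing state transformation, can be sketched on a double integrator with a nonsingular terminal sliding surface (all gains below are illustrative assumptions, and the switching term is smoothed to tame chattering in the crude Euler simulation):

    ```python
    import numpy as np

    # Nonsingular TSM control of x'' = u with surface
    # s = x + (1/beta) * |v|^gamma * sign(v),  1 < gamma < 2.
    beta, gamma, K = 1.0, 5.0 / 3.0, 2.0
    dt, steps = 1e-3, 20000
    x, v = 1.0, 0.0                          # initial state
    for _ in range(steps):
        s = x + (1.0 / beta) * abs(v) ** gamma * np.sign(v)
        u_eq = -(beta / gamma) * abs(v) ** (2.0 - gamma) * np.sign(v)
        u = u_eq - K * np.tanh(s / 0.01)     # smoothed switching term
        x += dt * v
        v += dt * u
    print(f"x = {x:.5f}, v = {v:.5f}")       # near the origin in finite time
    ```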

  19. Intelligent Graph Layout Using Many Users' Input.

    Science.gov (United States)

    Yuan, Xiaoru; Che, Limei; Hu, Yifan; Zhang, Xin

    2012-12-01

    In this paper, we propose a new strategy for graph drawing that utilizes layouts of many subgraphs supplied by a large group of people in a crowdsourcing manner. We developed an algorithm based on Laplacian-constrained distance embedding to merge subgraphs submitted by different users, while attempting to maintain the topological information of the individual input layouts. To facilitate the collection of layouts from many people, a lightweight interactive system has been designed to enable convenient dynamic viewing, modification and traversal between layouts. Compared with other existing graph layout algorithms, our approach can achieve more aesthetic and meaningful layouts with higher user preference.
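
    The Laplacian machinery at the heart of such methods can be illustrated with a plain spectral layout (the small graph is invented, and the user-supplied distance constraints of the actual algorithm are omitted):

    ```python
    import numpy as np

    # Spectral layout: place vertices using the eigenvectors of the graph
    # Laplacian associated with the two smallest nonzero eigenvalues.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (3, 4), (4, 5)]
    n = 6
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    w, V = np.linalg.eigh(L)             # eigenvalues in ascending order
    coords = V[:, 1:3]                   # skip the constant eigenvector
    for v, (px, py) in enumerate(coords):
        print(f"vertex {v}: ({px:+.3f}, {py:+.3f})")
    ```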

  1. Flexible input, dazzling output with IBM i

    CERN Document Server

    Victória-Pereira, Rafael

    2014-01-01

    Link your IBM i system to the modern business server world! This book presents easier and more flexible ways to get data into your IBM i system, along with rather surprising methods to export and present the vital business data it contains. You'll learn how to automate file transfers, seamlessly connect PC applications with your RPG programs, and much more. Input operations will become more flexible and user-proof, with self-correcting import processes and direct file transfers that require a minimum of user intervention. Also learn novel ways to present information: your DB2 data will look gr

  2. Input-output-controlled nonlinear equation solvers

    Science.gov (United States)

    Padovan, Joseph

    1988-01-01

    To improve the efficiency and stability of the successive substitution (SS) and Newton-Raphson (NR) schemes, the concept of input-output-controlled solvers (IOCS) is introduced. By employing the formal properties of the constrained versions of the SS and NR schemes, the IOCS algorithm can handle indefiniteness of the system Jacobian, maintain iterate monotonicity, and provide separate control of load incrementation and iterate excursions, among other features. To illustrate the algorithmic properties, results for several benchmark examples are presented; these define the associated numerical efficiency and stability of the IOCS.

  3. Example of input-output analysis

    Science.gov (United States)

    1975-01-01

    The thirty sectors included in the ECASTAR energy input-output model were listed. Five of these belong to energy producing sectors, fifteen to manufacturing industries, two to residential and commercial sectors, and eight to service industries. The model is capable of tracing impacts of an action in three dimensions: dollars, BTU's of energy, and labor. Four conservation actions considered were listed and then discussed separately, dealing with the following areas: increase in fuel efficiency, reduction in fuel used by the transportation and warehousing group, manufacturing of smaller automobiles, and a communications/transportation trade-off.
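
    The arithmetic behind such impact tracing is the standard Leontief relation x = (I - A)^-1 d; the toy three-sector sketch below (all coefficients invented, since the ECASTAR tables are not reproduced here) shows how a change in final demand propagates into total output and energy use:

    ```python
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],     # energy sector
                  [0.15, 0.25, 0.10],     # manufacturing
                  [0.05, 0.10, 0.15]])    # services (technical coefficients)
    d = np.array([10.0, 50.0, 30.0])      # final demand, dollars
    x = np.linalg.solve(np.eye(3) - A, d) # total output by sector
    btu_per_dollar = np.array([8.0, 2.0, 0.5])   # assumed energy intensities
    print("total output by sector:", np.round(x, 2))
    print("total energy use:", round(float(btu_per_dollar @ x), 1), "BTU")
    ```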

  4. Input data to run Landis-II

    Science.gov (United States)

    DeJager, Nathan R.

    2017-01-01

    The data are input data files to run the forest simulation model Landis-II for Isle Royale National Park. Files include: a) Initial_Comm, which includes the location of each mapcode, b) Cohort_ages, which includes the ages for each tree species-cohort within each mapcode, c) Ecoregions, which consist of different regions of soils and climate, d) Ecoregion_codes, which define the ecoregions, and e) Species_Params, which link the potential establishment and growth rates for each species with each ecoregion.

  5. Culture Input in Foreign Language Teaching

    Institute of Scientific and Technical Information of China (English)

    胡晶

    2009-01-01

    Language and culture are highly interrelated: language is not only the carrier of culture, it is also restricted by culture. Therefore, foreign language teaching aiming to cultivate students' intercultural communication competence should take cultural differences into consideration. In this paper, the relationship between language and culture is discussed, and the importance of intercultural communication is illustrated. Finally, according to the present situation of foreign language teaching in China, several strategies for cultural input in and out of class are suggested.

  6. Approximate input physics for stellar modelling

    CERN Document Server

    Pols, O R; Eggleton, P P; Han, Z; Pols, O R; Tout, C A; Eggleton, P P; Han, Z

    1995-01-01

    We present a simple and efficient, yet reasonably accurate, equation of state, which at the moderately low temperatures and high densities found in the interiors of stars less massive than the Sun is substantially more accurate than its predecessor by Eggleton, Faulkner & Flannery. Along with the most recently available values in tabular form of opacities, neutrino loss rates, and nuclear reaction rates for a selection of the most important reactions, this provides a convenient package of input physics for stellar modelling. We briefly discuss a few results obtained with the updated stellar evolution code.

  7. Conceptualizing Identity Development: Unmasking the Assumptions within Inventories Measuring Identity Development

    Science.gov (United States)

    Moran, Christy D.

    2009-01-01

    The purpose of this qualitative research was to analyze the dimensions and manifestations of identity development embedded within commonly used instruments measuring student identity development. To this end, a content analysis of ten identity assessment tools was conducted to determine the assumptions about identity development contained therein.…

  8. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...

  9. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid medium.

  10. Vocational Didactics: Core Assumptions and Approaches from Denmark, Germany, Norway, Spain and Sweden

    Science.gov (United States)

    Gessler, Michael; Moreno Herrera, Lázaro

    2015-01-01

    The design of vocational didactics has to meet special requirements. Six core assumptions are identified: outcome orientation, cultural-historical embedding, horizontal structure, vertical structure, temporal structure, and the changing nature of work. Different approaches and discussions from school-based systems (Spain and Sweden) and dual…

  11. Mutual assumptions and facts about nondisclosure among clinical supervisors and students in group supervision

    DEFF Research Database (Denmark)

    Nielsen, Geir Høstmark; Skjerve, Jan; Jacobsen, Claus Haugaard;

    2009-01-01

    In the two preceding papers of this issue of Nordic Psychology the authors report findings from a study of nondisclosure among student therapists and clinical supervisors. The findings were reported separately for each group. In this article, the two sets of findings are held together and compared, so as to draw a picture of mutual assumptions and facts about nondisclosure among students and supervisors.

  12. Experimental evaluation of the pure configurational stress assumption in the flow dynamics of entangled polymer melts

    DEFF Research Database (Denmark)

    Rasmussen, Henrik K.; Bejenariu, Anca Gabriela; Hassager, Ole

    2010-01-01

    A model with the assumption of pure configurational stress was accurately able to predict the startup as well as the reversed flow behavior. This confirms that this commonly used theoretical picture for the flow of polymeric liquids is a correct physical principle to apply. © 2010 The Society of Rheology. [DOI: 10.1122/1.3496378]

  13. Complex Learning Theory--Its Epistemology and Its Assumptions about Learning: Implications for Physical Education

    Science.gov (United States)

    Light, Richard

    2008-01-01

    Davis and Sumara (2003) argue that differences between commonsense assumptions about learning and those upon which constructivism rests present a significant challenge for the fostering of constructivist approaches to teaching in schools. Indeed, as Rink (2001) suggests, initiating any change process for teaching method needs to involve some…

  14. 76 FR 17158 - Assumption Buster Workshop: Distributed Data Schemes Provide Security

    Science.gov (United States)

    2011-03-28

    ... group that coordinates cyber security research activities in support of national security systems, is...: There is a strong and often repeated call for research to provide novel cyber security solutions. The... capable, and that re-examining cyber security solutions in the context of these assumptions will result in...

  15. Kinematic and static assumptions for homogenization in micromechanics of granular materials

    NARCIS (Netherlands)

    Kruyt, N.P.; Rothenburg, L.

    2004-01-01

    A study is made of kinematic and static assumptions for homogenization in micromechanics of granular materials for two cases. The first case considered deals with the elastic behaviour of isotropic, two-dimensional assemblies with bonded contacts. Using a minimum potential energy principle and estim

  16. Is a "Complex" Task Really Complex? Validating the Assumption of Cognitive Task Complexity

    Science.gov (United States)

    Sasayama, Shoko

    2016-01-01

    In research on task-based learning and teaching, it has traditionally been assumed that differing degrees of cognitive task complexity can be inferred through task design and/or observations of differing qualities in linguistic production elicited by second language (L2) communication tasks. Without validating this assumption, however, it is…

  17. How Do People Learn at the Workplace? Investigating Four Workplace Learning Assumptions

    NARCIS (Netherlands)

    Kooken, Jose; Ley, Tobias; Hoog, de Robert; Duval, Erik; Klamma, Ralf

    2007-01-01

    Any software development project is based on assumptions about the state of the world that probably will hold when it is fielded. Investigating whether they are true can be seen as an important task. This paper describes how an empirical investigation was designed and conducted for the EU funded APO

  18. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    Science.gov (United States)

    Reardon, Sean F.; Raudenbush, Stephen W.

    2013-01-01

    The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…

  20. Post Stereotypes: Deconstructing Racial Assumptions and Biases through Visual Culture and Confrontational Pedagogy

    Science.gov (United States)

    Jung, Yuha

    2015-01-01

    The Post Stereotypes project embodies confrontational pedagogy and involves postcard artmaking designed to both solicit expression of and deconstruct students' racial, ethnic, and cultural stereotypes and assumptions. As part of the Cultural Diversity in American Art course, students created postcard art that visually represented their personal…

  1. Credit Transfer amongst Students in Contrasting Disciplines: Examining Assumptions about Wastage, Mobility and Lifelong Learning

    Science.gov (United States)

    Di Paolo, Terry; Pegg, Ann

    2013-01-01

    While arrangements for credit transfer exist across the UK higher education sector, little is known about credit-transfer students or why they re-engage with study. Policy makers have cited credit transfer as a mechanism for reducing wastage and drop-out, but this paper challenges this assumption and instead examines how credit transfer serves…

  2. Net Generation at Social Software: Challenging Assumptions, Clarifying Relationships and Raising Implications for Learning

    Science.gov (United States)

    Valtonen, Teemu; Dillon, Patrick; Hacklin, Stina; Vaisanen, Pertti

    2010-01-01

    This paper takes as its starting point assumptions about use of information and communication technology (ICT) by people born after 1983, the so called net generation. The focus of the paper is on social networking. A questionnaire survey was carried out with 1070 students from schools in Eastern Finland. Data are presented on students' ICT-skills…

  3. An Algorithm for Determining Database Consistency Under the Closed World Assumption

    Institute of Scientific and Technical Information of China (English)

    沈一栋

    1992-01-01

    It is well known that there are circumstances where applying Reiter's closed world assumption (CWA) will lead to logical inconsistencies. In this paper, a new characterization of CWA consistency is presented and an algorithm is proposed for determining whether a database without function symbols is consistent with the CWA. The algorithm is shown to be efficient.
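
    To make the setting concrete, here is a toy propositional sketch of a CWA consistency check, not the paper's algorithm: derive what follows from the definite part of the database, assume everything else false, and test whether any disjunctive clause is left unsupported. The clause encoding is hypothetical.

      # Definite part of a toy database: facts plus body -> head rules.
      facts = {"p"}
      rules = [({"p"}, "q")]
      # Non-Horn clauses; these are what can break CWA consistency.
      disjunctions = [{"r", "s"}]

      # Forward-chain the definite part to find all derivable atoms.
      derived = set(facts)
      changed = True
      while changed:
          changed = False
          for body, head in rules:
              if body <= derived and head not in derived:
                  derived.add(head)
                  changed = True

      # Under the CWA, non-derivable atoms are assumed false, so the database
      # is consistent only if every disjunction contains a derivable atom.
      consistent = all(clause & derived for clause in disjunctions)
      print("CWA-consistent:", consistent)  # False: r or s holds, neither derivable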

  4. Tests of the frozen-flux and tangentially geostrophic assumptions using magnetic satellite data

    DEFF Research Database (Denmark)

    Chulliat, A.; Olsen, Nils; Sabaka, T.

    In 1984, Jean-Louis Le Mouël published a paper suggesting that the flow at the top of the Earth's core is tangentially geostrophic, i.e., the Lorentz force is much smaller than the Coriolis force in this particular region of the core. This new assumption was subsequently used to discriminate among...

  5. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    Science.gov (United States)

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al., 2008; Woods…

  6. H-INFINITY-OPTIMIZATION WITHOUT ASSUMPTIONS ON FINITE OR INFINITE ZEROS

    NARCIS (Netherlands)

    SCHERER, C

    1992-01-01

    Explicit algebraic conditions are presented for the suboptimality of some parameter in the H(infinity)-optimization problem by output measurement control. Apart from two strict properness conditions, no artificial assumptions restrict the underlying system. In particular, the plant may have zeros on

  7. A critical assessment of the ecological assumptions underpinning compensatory mitigation of salmon-derived nutrients

    Science.gov (United States)

    Collins, Scott F.; Marcarelli, Amy M.; Baxter, Colden V.; Wipfli, Mark S.

    2015-01-01

    We critically evaluate some of the key ecological assumptions underpinning the use of nutrient replacement as a means of recovering salmon populations and a range of other organisms thought to be linked to productive salmon runs. These assumptions include: (1) nutrient mitigation mimics the ecological roles of salmon, (2) mitigation is needed to replace salmon-derived nutrients and stimulate primary and invertebrate production in streams, and (3) food resources in rearing habitats limit populations of salmon and resident fishes. First, we call into question assumption one because an array of evidence points to the multi-faceted role played by spawning salmon, including disturbance via redd-building, nutrient recycling by live fish, and consumption by terrestrial consumers. Second, we show that assumption two may require qualification based upon a more complete understanding of nutrient cycling and productivity in streams. Third, we evaluate the empirical evidence supporting food limitation of fish populations and conclude it has been only weakly tested. On the basis of this assessment, we urge caution in the application of nutrient mitigation as a management tool. Although applications of nutrients and other materials intended to mitigate for lost or diminished runs of Pacific salmon may trigger ecological responses within treated ecosystems, contributions of these activities toward actual mitigation may be limited.

  8. HIERARCHICAL STRUCTURE IN ADL AND IADL - ANALYTICAL ASSUMPTIONS AND APPLICATIONS FOR CLINICIANS AND RESEARCHERS

    NARCIS (Netherlands)

    KEMPEN, GIJM; MYERS, AM; POWELL, LE

    1995-01-01

    The results of a Canadian study have shown that a set of 12 (I)ADL items did not meet the criteria of Guttman's scalogram program, questioning the assumption of hierarchical ordering. In this article, the hierarchical structure of (I)ADL items from the Canadian elderly sample is retested with anothe

  9. The Impact of Feedback Frequency on Learning and Task Performance: Challenging the "More Is Better" Assumption

    Science.gov (United States)

    Lam, Chak Fu; DeRue, D. Scott; Karam, Elizabeth P.; Hollenbeck, John R.

    2011-01-01

    Previous research on feedback frequency suggests that more frequent feedback improves learning and task performance (Salmoni, Schmidt, & Walter, 1984). Drawing from resource allocation theory (Kanfer & Ackerman, 1989), we challenge the "more is better" assumption and propose that frequent feedback can overwhelm an individual's cognitive resource…

  10. The Mediating Effect of World Assumptions on the Relationship between Trauma Exposure and Depression

    Science.gov (United States)

    Lilly, Michelle M.; Valdez, Christine E.; Graham-Bermann, Sandra A.

    2011-01-01

    The association between trauma exposure and mental health-related challenges such as depression are well documented in the research literature. The assumptive world theory was used to explore this relationship in 97 female survivors of intimate partner violence (IPV). Participants completed self-report questionnaires that assessed trauma history,…

  12. 76 FR 22925 - Assumption Buster Workshop: Abnormal Behavior Detection Finds Malicious Actors

    Science.gov (United States)

    2011-04-25

    ... Assumption Buster Workshop: Abnormal Behavior Detection Finds Malicious Actors AGENCY: The National... assumptionbusters@nitrd.gov . Travel expenses will be paid at the government rate for selected participants who live... behavioral models to monitor the size and destinations of financial transfers, and/or on-line...

  13. World assumptions, religiosity, and PTSD in survivors of intimate partner violence.

    Science.gov (United States)

    Lilly, Michelle M; Howell, Kathryn H; Graham-Bermann, Sandra

    2015-01-01

    Intimate partner violence (IPV) is among the most frequent types of violence annually affecting women. One frequent outcome of violence exposure is posttraumatic stress disorder (PTSD). The theory of shattered world assumptions represents one possible explanation for adverse mental health outcomes following trauma, contending that trauma disintegrates individuals' core assumptions that the world is safe and meaningful, and that the self is worthy. Research that explores world assumptions in relationship to survivors of IPV has remained absent. A more consistent finding in research on IPV suggests that religiosity is strongly associated with survivors' reactions to, and recovery from, IPV. The present study found that world assumptions was a significant mediator of the relationship between IPV exposure and PTSD symptoms. Religiosity was also significantly, positively related to PTSD symptoms, but was not significantly related to amount of IPV exposure. Though African American women reported more IPV exposure and greater religiosity than European American women in the sample, there were no interethnic differences in PTSD symptom endorsement. Implications of these findings are discussed.

  15. Challenging Assumptions about Values, Interests and Power in Further and Higher Education Partnerships

    Science.gov (United States)

    Elliott, Geoffrey

    2017-01-01

    This article raises questions that challenge assumptions about values, interests and power in further and higher education partnerships. These topics were explored in a series of semi-structured interviews with a sample of principals and senior higher education partnership managers of colleges spread across a single region in England. The data…

  16. Comparison of Three Common Experimental Designs to Improve Statistical Power When Data Violate Parametric Assumptions.

    Science.gov (United States)

    Porter, Andrew C.; McSweeney, Maryellen

    A Monte Carlo technique was used to investigate the small sample goodness of fit and statistical power of several nonparametric tests and their parametric analogues when applied to data which violate parametric assumptions. The motivation was to facilitate choice among three designs, simple random assignment with and without a concomitant variable…

  17. Exploring Epistemologies: Social Work Action as a Reflection of Philosophical Assumptions.

    Science.gov (United States)

    Dean, Ruth G.; Fenby, Barbara L.

    1989-01-01

    Two major philosophical assumptions underlying the literature, practice, and teaching of social work are reviewed: empiricism and existentialism. Two newer theoretical positions, critical theory and deconstruction, are also introduced. The implications for using each position as a context for teaching are considered. (MSE)

  18. Monitoring long-lasting insecticidal net (LLIN) durability to validate net serviceable life assumptions, in Rwanda

    NARCIS (Netherlands)

    Hakizimana, E.; Cyubahiro, B.; Rukundo, A.; Kabayiza, A.; Mutabazi, A.; Beach, R.; Patel, R.; Tongren, J.E.; Karema, C.

    2014-01-01

    Background To validate assumptions about the length of the distribution–replacement cycle for long-lasting insecticidal nets (LLINs) in Rwanda, the Malaria and other Parasitic Diseases Division, Rwanda Ministry of Health, used World Health Organization methods to independently confirm the three-year serviceable life assumption.

  19. The National Teacher Corps: A Study of Shifting Goals and Changing Assumptions

    Science.gov (United States)

    Eckert, Sarah Anne

    2011-01-01

    This article investigates the lasting legacy of the National Teacher Corps (NTC), which was created in 1965 by the U.S. federal government with two crucial assumptions: that teaching poor urban children required a very specific skill set and that teacher preparation programs were not providing adequate training in these skills. Analysis reveals…

  20. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    Science.gov (United States)

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  1. Principle Assumption of Space Object Detection Using Shipborne Great Aperture Photoelectrical Theodolite

    Science.gov (United States)

    Ouyang, Jia; Zhang, Tong-shuang; Wang, Qian-xue

    2016-02-01

    This paper introduces the uses of space object detection. By analyzing the current state of research on space object detection using photoelectrical equipment, a shipborne great aperture photoelectrical theodolite is designed. The principle assumption of space object detection using such a theodolite is put forward.

  3. The Effects of Pre Modified Input, Interactionally Modified Input, and Modified Output on EFL Learners' Comprehension of New Vocabularies

    Science.gov (United States)

    Maleki, Zinat; Pazhakh, AbdolReza

    2012-01-01

    The present study was an attempt to investigate the effects of premodified input, interactionally modified input and modified output on 80 EFL learners' comprehension of new words. The subjects were randomly assigned to four groups: premodified input, interactionally modified input, modified output and unmodified (control). Each group…

  4. On assumption in low-altitude investigation of dayside magnetospheric phenomena

    Science.gov (United States)

    Koskinen, H. E. J.

    In the physics of large-scale phenomena in complicated media, such as space plasmas, the chain of reasoning from the fundamental physics to conceptual models is a long and winding road, requiring much physical insight and reliance on various assumptions and approximations. The low-altitude investigation of dayside phenomena provides numerous examples of problems arising from the necessity to make strong assumptions. In this paper we discuss some important assumptions that are either unavoidable or at least widely used. Two examples are the concepts of frozen-in field lines and convection velocity. Instead of asking what violates the frozen-in condition, it is quite legitimate to ask what freezes the plasma and the magnetic field in the first place. Another important complex of problems are the limitations introduced by a two-dimensional approach or linearization of equations. Although modern research is more and more moving toward three-dimensional and time-dependent models, limitations in computing power often make a two-dimensional approach tempting. In a similar way, linearization makes equations analytically tractable. Finally, a very central question is the mapping. In the first approximation, the entire dayside magnetopause maps down to the ionosphere through the dayside cusp region. From the mapping viewpoint, the cusp is one of the most difficult regions and assumptions needed to perform the mapping in practice must be considered with the greatest possible care. We can never avoid assumptions but we must always make them clear to ourselves and also to the readers of our papers.

  5. Cometary micrometeorites and input of prebiotic compounds

    Directory of Open Access Journals (Sweden)

    Engrand C.

    2014-02-01

    The appearance of life on the early Earth was probably favored by inputs of extraterrestrial matter brought by carbonaceous chondrite-like objects or cometary material. Interplanetary dust collected nowadays on Earth is related to carbonaceous chondrites and to cometary material. These particles contain at least a few percent of organic matter and organic compounds (amino acids, PAHs, …), as well as hydrous silicates, and could have largely contributed to the budget of prebiotic matter on Earth about 4 Ga ago. A new population of cometary dust was recently discovered in the Concordia Antarctic micrometeorite collection. These “Ultracarbonaceous Antarctic Micrometeorites” (UCAMMs) are dominated by deuterium-rich and nitrogen-rich organic matter. They seem related to the “CHON” grains identified in comet Halley in 1986. Although rare in the micrometeorite flux (<5% of the micrometeorites), UCAMMs could have significantly contributed to the input of prebiotic matter. Their content in soluble organic matter is currently under study.

  6. Processing in (linear) systems with stochastic input

    Science.gov (United States)

    Nutu, Catalin Silviu; Axinte, Tiberiu

    2016-12-01

    The paper provides a different approach to real-world systems, such as the micro and macro systems of our real life, where man has little or no influence on the system, either not knowing the rules of the respective system or not knowing its input, and is thus mainly a spectator of the system's output. In such a system, the input and the laws ruling the system can only be "guessed", based on intuition or previous knowledge of the analyzer of the respective system. But, as we will see in the paper, there is also another, more theoretical and hence scientific, way to approach real-world systems, and this approach is mostly based on the theory related to Schrödinger's equation and the wave function associated with it, as well as on quantum mechanics. The main results of the paper concern the utilization of Schrödinger's equation and the related theory, together with quantum mechanics, in modeling real-life and real-world systems.

  7. Ground motion input in seismic evaluation studies

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, R.T.; Wu, S.C.

    1996-07-01

    This report documents research pertaining to conservatism and variability in seismic risk estimates. Specifically, it examines whether or not artificial motions produce unrealistic evaluation demands, i.e., demands significantly inconsistent with those expected from real earthquake motions. To study these issues, two types of artificial motions are considered: (a) motions with smooth response spectra, and (b) motions with realistic variations in spectral amplitude across vibration frequency. For both types of artificial motion, time histories are generated to match target spectral shapes. For comparison, empirical motions representative of those that might result from strong earthquakes in the Eastern U.S. are also considered. The study findings suggest that artificial motions resulting from typical simulation approaches (aimed at matching a given target spectrum) are generally adequate and appropriate in representing the peak-response demands that may be induced in linear structures and equipment responding to real earthquake motions. Also, given similar input Fourier energies at high-frequencies, levels of input Fourier energy at low frequencies observed for artificial motions are substantially similar to those levels noted in real earthquake motions. In addition, the study reveals specific problems resulting from the application of Western U.S. type motions for seismic evaluation of Eastern U.S. nuclear power plants.

  8. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.

  9. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    Directory of Open Access Journals (Sweden)

    David O. Smallwood

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.
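
    A small numerical sketch of the phenomenon described: with the input autospectra fixed and the inputs fully coherent, varying the relative phase between the two inputs moves the trace of the output autospectral density matrix between its extremes. The 2x2 frequency response matrix H is hypothetical.

      import numpy as np

      # Hypothetical 2-input/2-output frequency response matrix at one frequency.
      H = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
                    [0.2 + 0.1j, 0.8 + 0.4j]])

      def output_trace(phase):
          """Trace of S_yy = H S_xx H^H for unit input autospectra and a
          given relative phase between fully coherent inputs."""
          s12 = np.exp(1j * phase)  # cross-spectrum, ordinary coherence = 1
          Sxx = np.array([[1.0, s12], [np.conj(s12), 1.0]])
          Syy = H @ Sxx @ H.conj().T
          return np.trace(Syy).real

      phases = np.linspace(0.0, 2.0 * np.pi, 361)
      traces = [output_trace(p) for p in phases]
      print(f"trace range: [{min(traces):.3f}, {max(traces):.3f}]")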

  10. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    Science.gov (United States)

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery.
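
    As a concrete instance of the linear (deconvolution) case mentioned above, the following sketch recovers an unobserved input by Tikhonov-regularised least squares; the exponential impulse response, noise level and regularisation weight are all hypothetical, and the paper's MAP/MCMC machinery is far richer than this.

      import numpy as np

      rng = np.random.default_rng(1)
      n, dt, k = 100, 0.1, 0.5
      t = np.arange(n) * dt
      h = np.exp(-k * t) * dt  # discretised impulse response (assumed known)

      # Causal convolution matrix so that y = H u.
      H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
                    for i in range(n)])

      u_true = np.exp(-((t - 4.0) ** 2))  # the unobserved input to recover
      y = H @ u_true + 0.005 * rng.standard_normal(n)

      lam = 1e-3  # regularisation weight; in practice chosen by validation
      u_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
      err = np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true)
      print(f"relative reconstruction error: {err:.3f}")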

  11. Common-sense chemistry: The use of assumptions and heuristics in problem solving

    Science.gov (United States)

    Maeyer, Jenine Rachel

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many

  12. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    Energy Technology Data Exchange (ETDEWEB)

    Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Townsend, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bloom, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-01

    Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour-ahead commitment step is included before the dispatch step, and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and see a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in the number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and a 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0
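
    The commitment/dispatch structure whose modeling choices the paper varies can be illustrated with a deliberately tiny example; the two generators, their costs, and the brute-force enumeration below are hypothetical teaching devices, whereas a production cost model would solve an LP or MIP over thousands of units and time steps.

      from itertools import product

      GENS = [  # (name, min_MW, max_MW, cost_$_per_MWh, startup_$)
          ("coal_st", 100.0, 300.0, 20.0, 500.0),
          ("gas_ct", 0.0, 150.0, 40.0, 50.0),
      ]
      DEMAND = 320.0

      def dispatch_cost(u):
          """Least-cost dispatch for on/off pattern u, or None if infeasible."""
          on = [g for g, ui in zip(GENS, u) if ui]
          if not on or not (sum(g[1] for g in on) <= DEMAND <= sum(g[2] for g in on)):
              return None
          cost = sum(g[4] + g[3] * g[1] for g in on)  # startups + must-run minimums
          rest = DEMAND - sum(g[1] for g in on)
          for g in sorted(on, key=lambda g: g[3]):    # merit order above minimum
              take = min(rest, g[2] - g[1])
              cost += g[3] * take
              rest -= take
          return cost

      # Mixed-integer flavour: enumerate all binary commitment patterns.
      results = {u: c for u in product((0, 1), repeat=len(GENS))
                 if (c := dispatch_cost(u)) is not None}
      best = min(results, key=results.get)
      print(f"least-cost commitment {best}: ${results[best]:.0f}")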

  13. Analysis on relation between safety input and accidents

    Institute of Scientific and Technical Information of China (English)

    YAO Qing-guo; ZHANG Xue-mu; LI Chun-hui

    2007-01-01

    The amount of safety input directly determines the level of safety, and there exists a dialectical and unified relation between safety input and accidents. Based on field investigation and reliable data, this paper studied the dialectical relationship between safety input and accidents in depth and reached the following conclusion: the safety situation of coal enterprises is related to the safety input rate and is affected little by the safety input scale. On this basis, a relationship model between safety input and accidents, i.e. the accident model, was built.

  14. Auto Draw from Excel Input Files

    Science.gov (United States)

    Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.

    2011-01-01

    The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. The more frequent update of system diagrams can reduce confusion and errors, and is likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.
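
    A minimal sketch of the auto-draw idea in Python, assuming a hypothetical workbook layout in which a "Blocks" sheet lists one block per row as (name, x, y); the actual tool's schema and drawing engine are not described in the record.

      from openpyxl import load_workbook
      import matplotlib.pyplot as plt

      # Re-generate the diagram from the spreadsheet whenever the design
      # data change, so drawings never lag the Excel input.
      wb = load_workbook("design.xlsx", data_only=True)
      ws = wb["Blocks"]  # assumed sheet: header row, then (name, x, y) rows

      fig, ax = plt.subplots()
      for name, x, y in ws.iter_rows(min_row=2, values_only=True):
          ax.add_patch(plt.Rectangle((x, y), 1.5, 0.8, fill=False))
          ax.text(x + 0.75, y + 0.4, str(name), ha="center", va="center")
      ax.set_aspect("equal")
      ax.autoscale_view()
      plt.savefig("diagram.png")  # rerun after each design.xlsx update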

  15. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation value of 3.33 (Bq_crop/kg)/(Bq_soil/kg).
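
    The conversion from constants to sampled values can be reproduced with a few lines of numpy using the distributions reported above; interpreting the lognormal "standard deviation" of 3.33 as a geometric standard deviation is an assumption on our part.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000  # number of Monte Carlo samples (arbitrary here)

      consumption = rng.normal(102.96, 2.65, n)     # kg/yr
      crop_yield = rng.normal(3.23, 0.442, n)       # kg edible/m^2
      land_ratio = rng.normal(0.0312, 0.00292, n)   # fraction of land use

      # Lognormal: geometric mean 3.38e-4, geometric std 3.33 (assumed).
      uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), n)

      print("mean consumption:", consumption.mean())
      print("median uptake factor:", np.median(uptake))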

  16. Assumptions and Criteria for Performing a Feasibility Study of the Conversion of the High Flux Isotope Reactor Core to Use Low-Enriched Uranium Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Primm, R.T., III; Ellis, R.J.; Gehin, J.C.; Moses, D.L.; Binder, J.L.; Xoubi, N. (U. of Cincinnati)

    2006-02-01

    A computational study will be initiated during fiscal year 2006 to examine the feasibility of converting the High Flux Isotope Reactor from highly enriched uranium fuel to low-enriched uranium. The study will be limited to steady-state, nominal operation, reactor physics and thermal-hydraulic analyses of a uranium-molybdenum alloy that would be substituted for the current fuel powder, U3O8 mixed with aluminum. The purposes of this document are to (1) define the scope of studies to be conducted, (2) define the methodologies to be used to conduct the studies, (3) define the assumptions that serve as input to the methodologies, (4) provide an efficient means for communication with the Department of Energy and American research reactor operators, and (5) expedite review and commentary by those parties.

  17. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate a good-quality dynamic model of AUVs. In an optimal input design problem, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
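
    A bare-bones PSO loop of the kind used to solve such a constrained optimization is sketched below; the quadratic objective and box bounds are placeholders for the paper's robust input-design criterion and AUV input/output constraints.

      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):
          return float(np.sum(x ** 2))  # stand-in for the robust design cost

      n_particles, dim, iters = 30, 4, 100
      lo, hi = -5.0, 5.0  # box constraints on the design variables
      x = rng.uniform(lo, hi, (n_particles, dim))
      v = np.zeros_like(x)
      pbest = x.copy()
      pbest_f = np.array([objective(p) for p in x])
      gbest = pbest[pbest_f.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)  # keep particles inside the constraints
          f = np.array([objective(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()

      print("best cost found:", objective(gbest))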

  18. Innovation or 'Inventions'? The conflict between latent assumptions in marine aquaculture and local fishery.

    Science.gov (United States)

    Martínez-Novo, Rodrigo; Lizcano, Emmánuel; Herrera-Racionero, Paloma; Miret-Pastor, Lluís

    2016-06-01

    Recent European policy highlights the need to promote local fishery and aquaculture by means of innovation and joint participation in fishery management as one of the keys to achieve the sustainability of our seas. However, the implicit assumptions held by the actors in the two main groups involved - innovators (scientists, businessmen and administration managers) and local fishermen - can complicate, perhaps even render impossible, mutual understanding and co-operation. A qualitative analysis of interviews with members of both groups in the Valencian Community (Spain) reveals those latent assumptions and their impact on the respective practices. The analysis shows that the innovation narrative in which one group is based and the inventions narrative used by the other one are rooted in two dramatically different, or even antagonistic, collective worldviews. Any environmental policy that implies these groups should take into account these strong discords.

  19. IRT models with relaxed assumptions in eRm: A manual-like instruction

    Directory of Open Access Journals (Sweden)

    REINHOLD HATZINGER

    2009-03-01

    Linear logistic models with relaxed assumptions (LLRA), as introduced by Fischer (1974), are a flexible tool for the measurement of change for dichotomous or polytomous responses. As opposed to the Rasch model, assumptions on the dimensionality of items, their mutual dependencies and the distribution of the latent trait in the population of subjects are relaxed. Conditional maximum likelihood estimation allows for inference about treatment, covariate or trend effect parameters without taking the subjects' latent trait values into account. In this paper we show how LLRAs based on the LLTM, LRSM and LPCM can be used to answer various questions about the measurement of change and how they can be fitted in R using the eRm package. A number of small didactic examples are provided that can easily be used as templates for real data sets. All datafiles used in this paper are available from http://eRm.R-Forge.R-project.org/

  20. THE HISTORY OF BUILDING THE NORTHERN FRATERNAL CELLS OF VIRGIN MARY ASSUMPTION MONASTERY IN TIKHVIN

    Directory of Open Access Journals (Sweden)

    Tatiana Nikolaevna PYATNITSKAYA

    2014-01-01

    The article is focused on the formation of one of the fraternal houses of the Virgin Mary Assumption Monastery in Tikhvin (Leningrad region), whose volume-spatial composition was developed during the second half of the 17th century. It describes the history of the complex's origin around the Assumption Cathedral of the 16th century and the location of the cell housing in the wooden and stone ensembles. By comparing archival documents with data obtained from field studies, the initial planning and design features of the northern fraternal cells were identified. The research identified the brigades of Tikhvin masons who worked on the construction of the building in 1680-1690. Fragments of the original architectural decorations and facade colors were found. The research also produced graphic reconstructions, giving an idea not only of the original appearance of the building, but also of the history of its changes.

  1. Error in the description of foot kinematics due to violation of rigid body assumptions.

    Science.gov (United States)

    Nester, C J; Liu, A M; Ward, E; Howard, D; Cocheba, J; Derrick, T

    2010-03-03

    Kinematic data from rigid segment foot models inevitably include errors because the bones within each segment move relative to each other. This study sought to define the error in foot kinematic data due to violation of the rigid segment assumption. The research compared kinematic data from 17 different mid- and forefoot rigid segment models to kinematic data of the individual bones comprising these segments. Kinematic data from a previous dynamic cadaver model study were used to derive individual bone as well as foot segment kinematics. Mean and maximum errors due to violation of the rigid body assumption varied greatly between models. The model with the least error was the combination of the navicular and cuboid; the level of error that can be tolerated should be determined by the requirements of the kinematics research study being undertaken.

  2. Load assumption for fatigue design of structures and components counting methods, safety aspects, practical application

    CERN Document Server

    Köhler, Michael; Pötter, Kurt; Zenner, Harald

    2017-01-01

    Understanding the fatigue behaviour of structural components under variable load amplitude is an essential prerequisite for safe and reliable light-weight design. For designing and dimensioning, the expected stress (load) is compared with the capacity to withstand loads (fatigue strength). In this process, the safety necessary for each particular application must be ensured. A prerequisite for ensuring the required fatigue strength is a reliable load assumption. The authors describe the transformation of the stress- and load-time functions which have been measured under operational conditions to spectra or matrices with the application of counting methods. The aspects which must be considered for ensuring a reliable load assumption for designing and dimensioning are discussed in detail. Furthermore, the theoretical background for estimating the fatigue life of structural components is explained, and the procedures are discussed for numerous applications in practice. One of the prime intentions of the authors ...

  3. Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation

    Science.gov (United States)

    Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels

    2016-06-01

    An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.

  4. Unpacking assumptions about inclusion in community-based health promotion: perspectives of women living in poverty.

    Science.gov (United States)

    Ponic, Pamela; Frisby, Wendy

    2010-11-01

    Community-based health promoters often aim to facilitate "inclusion" when working with marginalized women to address their exclusion and related health issues. Yet the notion of inclusion has not been critically interrogated within this field, resulting in the perpetuation of assumptions that oversimplify it. We provide qualitative evidence on inclusion as a health-promotion strategy from the perspectives of women living in poverty. We collected data with women engaged in a 6-year community-based health promotion and feminist participatory action research project. Participants' experiences illustrated that inclusion was a multidimensional process that involved a dynamic interplay between structural determinants and individual agency. The women named multiple elements of inclusion across psychosocial, relational, organizational, and participatory dimensions. This knowledge interrupts assumptions that inclusion is achievable and desirable for so-called recipients of such initiatives. We thus call for critical consideration of the complexities, limitations, and possibilities of facilitating inclusion as a health-promotion strategy.

  5. The sexual victimization of men in America: new data challenge old assumptions.

    Science.gov (United States)

    Stemple, Lara; Meyer, Ilan H

    2014-06-01

    We assessed 12-month prevalence and incidence data on sexual victimization in 5 federal surveys that the Bureau of Justice Statistics, the Centers for Disease Control and Prevention, and the Federal Bureau of Investigation conducted independently in 2010 through 2012. We used these data to examine the prevailing assumption that men rarely experience sexual victimization. We concluded that federal surveys detect a high prevalence of sexual victimization among men, in many circumstances similar to the prevalence found among women. We identified factors that perpetuate misperceptions about men's sexual victimization: reliance on traditional gender stereotypes, outdated and inconsistent definitions, and methodological sampling biases that exclude inmates. We recommend changes that move beyond regressive gender assumptions, which can harm both women and men.

  6. Against Comprehensible Input: The Input Hypothesis and the Development of Second-language Competence.

    Science.gov (United States)

    White, Lydia

    1987-01-01

    Discusses several objections to Krashen's Input Hypothesis, which states that language acquisition occurs when learners understand language at a stage slightly higher than their current one, aided by their understanding of extralinguistic cues. (Author/LMO)

  7. Hybrid input function estimation using a single-input-multiple-output (SIMO) approach

    Science.gov (United States)

    Su, Yi; Shoghi, Kooresh I.

    2009-02-01

    A hybrid blood input function (BIF) model that incorporates regions of interest (ROI) based peak estimation and a two-exponential tail model was proposed to describe the blood input function. The hybrid BIF model was applied to the single-input-multiple-output (SIMO) optimization based approach for BIF estimation using time activity curves (TACs) obtained from ROIs defined at the left ventricle (LV) blood pool and myocardium regions of dynamic PET images. The proposed BIF estimation method was applied with 0, 1 and 2 blood samples as constraints for BIF estimation using simulated small animal PET data. The relative percentage difference of the area-under-curve (AUC) measurement between the estimated BIF and the true BIF was calculated to evaluate BIF estimation accuracy. SIMO-based BIF estimation using Feng's input function model was also applied for comparison. The hybrid method provided improved BIF estimation in terms of both mean accuracy and variability compared to Feng's model based BIF estimation in our simulation study. When two blood samples were used as constraints, the percentage BIF estimation error was 0.82 +/- 4.32% for the hybrid approach and 4.63 +/- 10.67% for the Feng's model based approach. Using the hybrid BIF, improved kinetic parameter estimation was also obtained.
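
    The hybrid idea can be sketched as follows: keep the image-derived LV curve up to its peak, then replace the tail with a fitted two-exponential model. The simulated TAC, parameter values and fitting details are illustrative assumptions, not the paper's SIMO estimation procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      def tail(t, a1, k1, a2, k2):
          """Two-exponential tail model for the blood input function."""
          return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 60.0, 241)  # minutes
      lv_tac = (1 - np.exp(-3.0 * t)) * tail(t, 8.0, 0.9, 2.0, 0.02)
      lv_tac += 0.05 * rng.standard_normal(t.size)  # ROI measurement noise

      p = int(lv_tac.argmax())  # ROI-based peak estimate
      popt, _ = curve_fit(tail, t[p:], lv_tac[p:],
                          p0=(5.0, 1.0, 1.0, 0.01), maxfev=10000)

      # Hybrid BIF: measured curve up to the peak, fitted model beyond it.
      bif = np.concatenate([lv_tac[:p], tail(t[p:], *popt)])
      print("fitted tail parameters:", popt)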

  8. Effect of input compression and input frequency response on music perception in cochlear implant users.

    Science.gov (United States)

    Halliwell, Emily R; Jones, Linor L; Fraser, Matthew; Lockley, Morag; Hill-Feltham, Penelope; McKay, Colette M

    2015-06-01

    A study was conducted to determine whether modifications to input compression and input frequency response characteristics can improve music-listening satisfaction in cochlear implant users. Experiment 1 compared three pre-processed versions of music and speech stimuli in a laboratory setting: original, compressed, and flattened frequency response. Music excerpts comprised three music genres (classical, country, and jazz), and a running speech excerpt was compared. Experiment 2 implemented a flattened input frequency response in the speech processor program. In a take-home trial, participants compared unaltered and flattened frequency responses. Ten and twelve adult Nucleus Freedom cochlear implant users participated in Experiments 1 and 2, respectively. Experiment 1 revealed a significant preference for music stimuli with a flattened frequency response compared to both original and compressed stimuli, whereas there was a significant preference for the original (rising) frequency response for speech stimuli. Experiment 2 revealed no significant mean preference for the flattened frequency response, with 9 of 11 subjects preferring the rising frequency response. Input compression did not alter music enjoyment. Comparison of the two experiments indicated that individual frequency response preferences may depend on the genre or familiarity, and particularly whether the music contained lyrics.

  9. How Much Input Is Enough? Correlating Comprehension and Child Language Input in an Endangered Language

    Science.gov (United States)

    Meakins, Felicity; Wigglesworth, Gillian

    2013-01-01

    In situations of language endangerment, the ability to understand a language tends to persevere longer than the ability to speak it. As a result, the possibility of language revival remains high even when few speakers remain. Nonetheless, this potential requires that those with high levels of comprehension received sufficient input as children for…

  10. Heterosexual assumptions in verbal and non-verbal communication in nursing.

    Science.gov (United States)

    Röndahl, Gerd; Innala, Sune; Carlsson, Marianne

    2006-11-01

    This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they

  11. Determination of the optimal periodic maintenance policy under imperfect repair assumption

    OpenAIRE

    Maria Luiza Guerra de Toledo

    2014-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to repairable systems failures. The usual assumptions of minimal or perfect repair at failures are not suitable for many real systems, requiring the application of Imperfect Repair models. In this work, the classes Arithmetic Reduction of Age and Arithmetic Reduction of Intensity, proposed by Doyen and Gaudoin (2004), are explored. Likelihood functions for such models are derived, and the parameters are es...

  12. RateMyProfessors.com: Testing Assumptions about Student Use and Misuse

    Science.gov (United States)

    Bleske-Rechek, April; Michels, Kelsey

    2010-01-01

    Since its inception in 1999, the RateMyProfessors.com (RMP.com) website has grown in popularity and, with that, notoriety. In this research we tested three assumptions about the website: (1) Students use RMP.com to either rant or rave; (2) Students who post on RMP.com are different from students who do not post; and (3) Students reward easiness by…

  13. Assumptions in quantitative analyses of health risks of overhead power lines

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, A.; Wardekker, J.A.; Van der Sluijs, J.P. [Department of Science, Technology and Society, Copernicus Institute, Utrecht University, Budapestlaan 6, 3584 CD Utrecht (Netherlands)

    2012-02-15

    One of the major issues hampering the formulation of uncontested policy decisions on contemporary risks is the presence of uncertainties in various stages of the policy cycle. In the literature, different approaches are suggested to address the problem of provisional and uncertain evidence. Reflective approaches such as pedigree analysis can be used to explore the quality of evidence when quantification of uncertainties is at stake. One of the issues where the quality of evidence impedes policy making is the case of electromagnetic fields (EMF). In this case, a (statistical) association was suggested with an increased risk of childhood leukaemia in the vicinity of overhead power lines. To date, however, no biophysical mechanism that could support this association has been found. The Dutch government bases its policy concerning overhead power lines on the precautionary principle. For The Netherlands, previous studies have assessed the potential number of extra cases of childhood leukaemia due to the presence of overhead power lines. However, such a quantification of the health risk of EMF entails a (large) number of assumptions, both prior to and in the calculation chain. In this study, these assumptions were prioritized and critically appraised in an expert elicitation workshop, using a pedigree matrix for the characterization of assumptions in assessments. It appeared that the assumptions regarded as important in quantifying the health risks show a high value-ladenness. The results show that, given the present state of knowledge, quantification of the health risks of EMF is premature. We consider the current implementation of the precautionary principle by the Dutch government to be adequate.

  14. Risk Pooling, Commitment and Information: An experimental test of two fundamental assumptions

    OpenAIRE

    Abigail Barr

    2003-01-01

    This paper presents rigorous and direct tests of two assumptions relating to limited commitment and asymmetric information that currently underpin models of risk pooling. A specially designed economic experiment involving 678 subjects across 23 Zimbabwean villages is used to solve the problems of observability and quantification that have frustrated previous attempts to conduct such tests. I find that more extrinsic commitment is associated with more risk pooling, but that more informat...

  15. Logic Assumptions and Risks Framework Applied to Defence Campaign Planning and Evaluation

    Science.gov (United States)

    2013-05-01

    Checkland, P. and J. Poulter (2006). Learning for Action: A Definitive Account of Soft Systems Methodology and its use for Practitioners, Teachers and... Systems Methodology alerts us to as differing 'world views'. These are contrasted with assumptions about the causal linkages about the implementation... the problem and of the population, and the boundary, or limiting conditions, of the effects of the program – what Checkland and Poulter's (2006) Soft

  16. Bootstrapping realized volatility and realized beta under a local Gaussianity assumption

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    The main contribution of this paper is to propose a new bootstrap method for statistics based on high frequency returns. The new method exploits the local Gaussianity and the local constancy of volatility of high frequency returns, two assumptions that can simplify inference in the high frequency...... simulations and use empirical data to compare the finite sample accuracy of our new bootstrap confidence intervals for integrated volatility and integrated beta with the existing results....
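
    The local Gaussianity idea can be made concrete with a small sketch: within short blocks, high-frequency returns are treated as Gaussian with locally constant volatility, so bootstrap samples are drawn by rescaling block-local Gaussian draws. This is an illustrative reading of the method, not the paper's implementation; the block length, the simulated price path, and all function names below are assumptions.

        import numpy as np

        def realized_volatility(returns):
            # Realized volatility: sum of squared high-frequency returns.
            return np.sum(returns ** 2)

        def local_gaussian_bootstrap(returns, block_len=30, n_boot=999, seed=0):
            # Within each block, volatility is assumed locally constant, so
            # bootstrap returns are drawn as N(0, sigma_hat^2) with sigma_hat
            # estimated from that block.
            rng = np.random.default_rng(seed)
            blocks = [returns[i:i + block_len]
                      for i in range(0, len(returns), block_len)]
            stats = np.empty(n_boot)
            for b in range(n_boot):
                resampled = [rng.normal(0.0, blk.std(ddof=0), size=len(blk))
                             for blk in blocks]
                stats[b] = realized_volatility(np.concatenate(resampled))
            return stats

        # Toy example: one "day" of 5-minute returns with slowly varying volatility.
        rng = np.random.default_rng(1)
        sigma = 0.01 * (1 + 0.5 * np.sin(np.linspace(0, np.pi, 78)))
        r = rng.normal(0.0, sigma)
        boot = local_gaussian_bootstrap(r)
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"RV = {realized_volatility(r):.6f}, 95% CI = ({lo:.6f}, {hi:.6f})")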

  17. Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions

    Science.gov (United States)

    Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.

    2015-01-01

    Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
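
    A minimal sketch of the logic behind a non-spatial mark-resight estimate may help fix ideas. It uses a simple Lincoln-Petersen-style ratio and hypothetical numbers; the models cited in the record additionally account for individual heterogeneity in sighting probability, which this sketch deliberately omits.

        def mark_resight_estimate(n_marked, sightings_marked, sightings_unmarked):
            # n_marked: marked (e.g. radio-collared) animals known to be alive.
            # sightings_*: camera-trap detections of marked and unmarked animals.
            total = sightings_marked + sightings_unmarked
            # Assume marked and unmarked animals are photographed at the same rate:
            # sightings_marked / total ~= n_marked / N, so
            # N ~= n_marked * total / sightings_marked.
            return n_marked * total / sightings_marked

        # Hypothetical raccoon numbers, for illustration only.
        print(mark_resight_estimate(n_marked=20, sightings_marked=55,
                                    sightings_unmarked=130))   # about 67 animals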

  18. Understanding the multiple realities of everyday life: basic assumptions in focus-group methodology.

    Science.gov (United States)

    Ivanoff, Synneve Dahlin; Hultberg, John

    2006-06-01

    In recent years, there has been a notable growth in the use of focus groups within occupational therapy. It is important to understand what kind of knowledge focus-group methodology is meant to acquire. The purpose of this article is to create an understanding of the basic assumptions within focus-group methodology from a theory of science perspective in order to elucidate and encourage reflection on the paradigm. This will be done based on a study of contemporary literature. To further the knowledge of basic assumptions the article will focus on the following themes: the focus-group research arena, the foundation and its core components; subjects, the role of the researcher and the participants; activities, the specific tasks and procedures. Focus-group methodology can be regarded as a specific research method within qualitative methodology with its own form of methodological criteria, as well as its own research procedures. Participants construct a framework to make sense of their experiences, and in interaction with others these experiences will be modified, leading to the construction of new knowledge. The role of the group leader is to facilitate a fruitful environment for the meaning to emerge and to ensure that the understanding of the meaning emerges independently of the interpreter. Focus-group methodology thus shares, in the authors' view, some basic assumptions with social constructivism.

  19. Combustion Effects in Laser-oxygen Cutting: Basic Assumptions, Numerical Simulation and High Speed Visualization

    Science.gov (United States)

    Zaitsev, Alexander V.; Ermolaev, Grigory V.

    Laser-oxygen cutting is a technological process that is very complicated to describe theoretically. Iron-oxygen combustion plays a leading role in it, making the process highly effective and able to cut thicker plates, while at the same time producing special types of striations and other defects on the cut surface. In this paper, results of numerical simulation based on elementary assumptions about iron-oxygen combustion are verified with high-speed visualization of the laser-oxygen cutting process. Based on the assumption that iron oxide loses its protective properties after melting, a simulation of striation formation due to cycles of laser-induced, non-self-sustained combustion is proposed. The assumption that the reaction-limiting factor is oxygen transport from the jet to the cutting front allows the reaction intensity to be calculated by solving the Navier-Stokes and diffusion equations in the gas phase. The influence of oxygen purity and pressure is studied theoretically. The results of the numerical simulation are examined with high-speed visualization of laser-oxygen cutting of 4-20 mm mild steel plates at cutting conditions close to industrial ones.

  20. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    Science.gov (United States)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is comprehensively investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we stepwise remove these simplifications until we reach the most comprehensive FSI simulations. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).
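
    The two stagnation metrics named here have standard definitions that a short sketch can make explicit: time-averaged wall shear stress (TAWSS) and the oscillatory shear index (OSI), computed from the wall shear stress vector over one cardiac cycle at a single wall point. The time series below is synthetic and purely illustrative.

        import numpy as np

        def tawss_and_osi(tau, dt):
            # tau: (n_steps, 3) wall shear stress vector over one cycle; dt: step.
            # TAWSS = (1/T) * integral |tau| dt
            # OSI   = 0.5 * (1 - |integral tau dt| / integral |tau| dt)
            mag_int = np.sum(np.linalg.norm(tau, axis=1)) * dt
            vec_int = np.linalg.norm(np.sum(tau, axis=0) * dt)
            T = len(tau) * dt
            return mag_int / T, 0.5 * (1.0 - vec_int / mag_int)

        # Synthetic pulsatile WSS with a direction reversal (hypothetical values).
        t = np.linspace(0, 1, 200, endpoint=False)
        tau = np.stack([1.5 * np.sin(2 * np.pi * t),
                        0.3 * np.cos(2 * np.pi * t),
                        np.zeros_like(t)], axis=1)
        tawss, osi = tawss_and_osi(tau, dt=t[1] - t[0])
        print(f"TAWSS = {tawss:.3f} Pa, OSI = {osi:.3f}")  # OSI near 0.5: oscillatory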

  1. Efficient Accountable Authority Identity-Based Encryption under Static Complexity Assumptions

    CERN Document Server

    Libert, Benoît

    2008-01-01

    At Crypto'07, Goyal introduced the concept of Accountable Authority Identity-Based Encryption (A-IBE) as a convenient means to reduce the amount of trust in authorities in Identity-Based Encryption (IBE). In this model, if the Private Key Generator (PKG) maliciously re-distributes users' decryption keys, it runs the risk of being caught and prosecuted. Goyal proposed two constructions: a first one based on Gentry's IBE which relies on strong assumptions (such as q-Bilinear Diffie-Hellman Inversion) and a second one resting on the more classical Decision Bilinear Diffie-Hellman (DBDH) assumption but that is too inefficient for practical use. In this work, we propose a new construction that is secure assuming the hardness of the DBDH problem. The efficiency of our scheme is comparable with that of Goyal's main proposal with the advantage of relying on static assumptions (i.e. the strength of which does not depend on the number of queries allowed to the adversary). By limiting the number of adversarial rewinds i...

  2. Bayesian Mass Estimates of the Milky Way II: The dark and light sides of parameter assumptions

    CERN Document Server

    Eadie, Gwendolyn M

    2016-01-01

    We present mass and mass profile estimates for the Milky Way Galaxy using the Bayesian analysis developed by Eadie et al (2015b) and using globular clusters (GCs) as tracers of the Galactic potential. The dark matter and GCs are assumed to follow different spatial distributions; we assume power-law model profiles and use the model distribution functions described in Evans et al. (1997); Deason et al (2011, 2012a). We explore the relationships between assumptions about model parameters and how these assumptions affect mass profile estimates. We also explore how using subsamples of the GC population beyond certain radii affect mass estimates. After exploring the posterior distributions of different parameter assumption scenarios, we conclude that a conservative estimate of the Galaxy's mass within 125kpc is $5.22\\times10^{11} M_{\\odot}$, with a $50\\%$ probability region of $(4.79, 5.63) \\times10^{11} M_{\\odot}$. Extrapolating out to the virial radius, we obtain a virial mass for the Milky Way of $6.82\\times10^{...

  3. Differentiating Different Modeling Assumptions in Simulations of MagLIF loads on the Z Generator

    Science.gov (United States)

    Jennings, C. A.; Gomez, M. R.; Harding, E. C.; Knapp, P. F.; Ampleford, D. J.; Hansen, S. B.; Weis, M. R.; Glinsky, M. E.; Peterson, K.; Chittenden, J. P.

    2016-10-01

    Metal liners imploded by a fast-rising current are the basis of MagLIF loads on the Z generator, and MagLIF experiments have had some success. While experiments are increasingly well diagnosed, many of the measurements (particularly during stagnation) are time integrated, limited in spatial resolution, or require additional assumptions to interpret in the context of a structured, rapidly evolving system. As such, in validating MHD calculations, there is the potential for the same observables in the experimental data to be reproduced under different modeling assumptions. Using synthetic diagnostics of the results of different pre-heat, implosion and stagnation simulations run with the Gorgon MHD code, we discuss how the interpretation of typical Z diagnostics relates to more fundamental simulation parameters. We then explore the extent to which different assumptions on instability development, current delivery, high-Z mix into the fuel and initial laser deposition can be differentiated in our existing measurements. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.

  4. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    Science.gov (United States)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures, where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the full set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent in the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high-pressure case.

  5. What is this Substance? What Makes it Different? Mapping Progression in Students' Assumptions about Chemical Identity

    Science.gov (United States)

    Ngai, Courtney; Sevian, Hannah; Talanquer, Vicente

    2014-09-01

    Given the diversity of materials in our surroundings, one should expect scientifically literate citizens to have a basic understanding of the core ideas and practices used to analyze chemical substances. In this article, we use the term 'chemical identity' to encapsulate the assumptions, knowledge, and practices upon which chemical analysis relies. We conceive chemical identity as a core crosscutting disciplinary concept which can bring coherence and relevance to chemistry curricula at all educational levels, primary through tertiary. Although chemical identity is not a concept explicitly addressed by traditional chemistry curricula, its understanding can be expected to evolve as students are asked to recognize different types of substances and explore their properties. The goal of this contribution is to characterize students' assumptions about factors that determine chemical identity and to map how core assumptions change with training in the discipline. Our work is based on the review and critical analysis of existing research findings on students' alternative conceptions in chemistry education, and historical and philosophical analyses of chemistry. From this perspective, our analysis contributes to the growing body of research in the area of learning progressions. In particular, it reveals areas in which our understanding of students' ideas about chemical identity is quite robust, but also highlights the existence of major knowledge gaps that should be filled in to better foster student understanding. We provide suggestions in this area and discuss implications for the teaching of chemistry.

  6. The Universality of Intuition: An A Posteriori Critique of an A Priori Assumption

    Directory of Open Access Journals (Sweden)

    Roohollah Haghshenas

    2015-03-01

    Full Text Available Intuition has a central role in philosophy: the role of arbitrating between different opinions. When a philosopher shows that "intuition" supports his view, he takes this as a good reason for it. In contrast, if we show some contradictions between intuition and a theory or some implications of it, we think a replacement, or at least some revisions, would be needed. There are some well-known examples of this role for intuition in many fields of philosophy: the transplant case in ethics, the Chinese nation case in philosophy of mind, and the Gettier examples in epistemology. But there is an assumption here: we suppose all people think in the same manner, i.e., we think intuition is universal. Experimental philosophy tries to study this assumption experimentally. This project continues Quine's movement to the "pursuit of truth" from a naturalistic point of view, making epistemology "a branch of natural science." The work of experimental philosophy shows that in many cases people with different cultural backgrounds respond to some specific moral or epistemological cases (like the Gettier examples) differently, and thus intuition is not universal. So, many problems that are based on this assumption may be dissolved, have plural forms for plural cultures, or be bounded to some specific cultures (Western culture in many cases).

  7. An Adaptive Dynamic Surface Controller for Ultralow Altitude Airdrop Flight Path Angle with Actuator Input Nonlinearity

    Directory of Open Access Journals (Sweden)

    Mao-long Lv

    2016-01-01

    Full Text Available In the process of ultralow altitude airdrop, many factors such as actuator input dead-zone, backlash, uncertain external atmospheric disturbance, and unknown model nonlinearity affect the precision of trajectory tracking. In response, a robust adaptive neural network dynamic surface controller is developed. First, the aircraft longitudinal dynamics with actuator input nonlinearity are derived; the unknown nonlinear model functions are approximated by means of an RBF neural network. Also, an adaptation strategy is used to achieve robustness against model uncertainties. Finally, it is proved that all the signals in the closed-loop system are bounded and that the tracking error converges to a small residual set asymptotically. Simulation results demonstrate the excellent tracking performance and strong robustness of the proposed method, which is applicable not only to actuators with input dead-zone but also to backlash nonlinearity. At the same time, it can effectively overcome the effects of the dead-zone and the atmospheric disturbance on the system and ensure fast tracking of the desired flight path angle instruction, dispensing with the assumption that the system functions must be known.
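
    The approximation step this controller leans on can be sketched in isolation: a radial basis function (RBF) network reproducing an unknown scalar nonlinearity. The centers, width, and offline least-squares fit below are illustrative assumptions; in the controller itself the weights would instead be adjusted online by the adaptation law.

        import numpy as np

        def rbf_features(x, centers, width=0.5):
            # Gaussian RBF feature matrix, shape (len(x), len(centers)).
            return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

        f = lambda x: np.sin(2 * x) + 0.3 * x ** 2   # stand-in "unknown" nonlinearity
        x_train = np.linspace(-2, 2, 50)
        centers = np.linspace(-2, 2, 11)

        Phi = rbf_features(x_train, centers)
        w, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)

        x_test = np.array([-1.3, 0.4, 1.7])
        print(np.abs(rbf_features(x_test, centers) @ w - f(x_test)))  # small errors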

  8. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells.

    Science.gov (United States)

    Vidne, Michael; Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W; Kulkarni, Jayant; Litke, Alan M; Chichilnisky, E J; Simoncelli, Eero; Paninski, Liam

    2012-08-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.
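
    A minimal sketch of the model structure described here: each cell's spike rate is an exponential function of the filtered stimulus, the cell's own spike history, and a shared slow noise term. The filter shapes, noise timescale, and all constants below are assumptions for illustration, not parameters fitted to the recordings.

        import numpy as np

        rng = np.random.default_rng(0)
        T, dt = 2000, 0.001                   # 1 ms time bins
        stim = rng.normal(size=T)             # white-noise stimulus

        k = np.exp(-np.arange(30) / 10.0)           # stimulus filter (assumed shape)
        h = -2.0 * np.exp(-np.arange(20) / 5.0)     # post-spike filter (refractoriness)
        common = 0.5 * np.repeat(rng.normal(size=T // 10), 10)   # slow shared noise

        spikes = np.zeros(T)
        drive = np.convolve(stim, k)[:T] + common
        for t in range(T):
            # Spike-history term: h applied to this cell's recent spikes.
            hist = spikes[max(0, t - 20):t][::-1] @ h[:min(t, 20)]
            rate = np.exp(-1.0 + drive[t] + hist)   # conditional intensity, spikes/s
            spikes[t] = rng.poisson(rate * dt) > 0
        print(f"{int(spikes.sum())} spikes in {T * dt:.1f} s")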

  9. Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment

    Science.gov (United States)

    Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.

    2015-01-01

    In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal here is to provide initial estimates of major nitrogen inputs of NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed in areas of limited data availability. Despite data limitations, we believe that it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.
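
    The accounting identity behind NANI is simple enough to state as code: a sum of fertilizer input, agricultural nitrogen fixation, atmospheric deposition, and net import of nitrogen in food and feed (negative for a net exporter). All numbers below are placeholders, not the paper's estimates for India.

        def nani(fertilizer, crop_fixation, deposition, net_food_feed_import):
            # All terms in kg N per km^2 per year; the net import term may be
            # negative for a net exporter of N in food and feed.
            return fertilizer + crop_fixation + deposition + net_food_feed_import

        total = nani(fertilizer=3000, crop_fixation=1200,
                     deposition=800, net_food_feed_import=-400)
        print(f"NANI = {total} kg N / km^2 / yr")   # 4600 with these placeholders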

  10. A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data

    KAUST Repository

    Babuška, Ivo

    2010-01-01

    This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
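
    A one-dimensional model problem shows the collocation idea under stated assumptions: with a constant lognormal coefficient, each deterministic solve has a closed form, so every Gauss point costs one uncoupled solve and the quadrature-weighted combination approximates the expected solution. The problem, sigma, and point count are all illustrative.

        import numpy as np
        from numpy.polynomial.hermite import hermgauss

        # Model problem: -(a u')' = 1 on (0, 1), u(0) = u(1) = 0, with a constant
        # random coefficient a = exp(sigma * Y), Y ~ N(0, 1). Each realization has
        # the exact solution u(x) = x (1 - x) / (2 a).
        sigma, x = 0.5, 0.5

        def solve_deterministic(a, x):
            return x * (1.0 - x) / (2.0 * a)

        # Gauss-Hermite collocation: E[f(Y)] ~ (1/sqrt(pi)) sum_i w_i f(sqrt(2) z_i).
        nodes, weights = hermgauss(8)          # 8 collocation points
        vals = np.array([solve_deterministic(np.exp(sigma * np.sqrt(2) * z), x)
                         for z in nodes])
        mean_u = (weights @ vals) / np.sqrt(np.pi)

        # Exact mean, since E[1/a] = E[exp(-sigma * Y)] = exp(sigma^2 / 2).
        exact = x * (1 - x) / 2 * np.exp(sigma ** 2 / 2)
        print(f"collocation: {mean_u:.6f}, exact: {exact:.6f}")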

  11. A Study on the Input Hypothesis and Interaction Hypothesis

    Institute of Scientific and Technical Information of China (English)

    李雪清

    2016-01-01

    In Second Language Acquisition theory, input and interaction are considered two key factors greatly influencing learners' acquisition rate and quality, and input and interaction research has therefore been receiving increasing attention in recent years. Among this large body of research, Krashen's input hypothesis and Long's interaction hypothesis are perhaps the most influential theories, from which most input and interaction studies have developed. The input hypothesis claims that comprehensible input is the only way to acquire language, whereas the interaction hypothesis argues that interaction is necessary for language acquisition. This thesis therefore attempts to conduct a descriptive analysis of the input hypothesis and the interaction hypothesis, based on their basic ideas, theoretical basis, comparisons and empirical work. It concludes that the input hypothesis and the interaction hypothesis succeed in interpreting the process of language acquisition to some extent, and offer both theoretical and practical inspiration for second language teaching.

  12. Characterization of Input Current Interharmonics in Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Zare, Firuz

    2017-01-01

    -edge symmetrical regularly sampled Space Vector Modulation (SVM) technique, on the input current interharmonic components are presented and discussed. Particular attention is also given to the influence of the asymmetrical regularly sampled modulation technique on the drive input current interharmonics...
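
    Interharmonics are spectral lines at non-integer multiples of the supply fundamental, which a plain FFT over a sufficiently long window can expose. The synthetic drive input current below, with a 50 Hz fundamental, a 5th harmonic, and one interharmonic, is purely illustrative; the frequencies and amplitudes are not measurements from a real drive.

        import numpy as np

        fs, f1, T = 10000, 50.0, 10.0          # 10 s window -> 0.1 Hz resolution
        t = np.arange(0, T, 1 / fs)
        i_in = (10.0 * np.sin(2 * np.pi * f1 * t)
                + 1.0 * np.sin(2 * np.pi * 5 * f1 * t)
                + 0.4 * np.sin(2 * np.pi * 183.3 * t))   # interharmonic component

        spec = 2 * np.abs(np.fft.rfft(i_in)) / len(i_in)
        freqs = np.fft.rfftfreq(len(i_in), 1 / fs)

        # Report spectral lines, flagging those not at integer multiples of f1.
        for f, a in zip(freqs, spec):
            if a > 0.1:
                kind = ("harmonic" if abs(f / f1 - round(f / f1)) < 1e-6
                        else "interharmonic")
                print(f"{f:7.1f} Hz  amplitude {a:5.2f}  ({kind})")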

  13. Vehicle Modeling for use in the CAFE model: Process description and modeling assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Moawad, Ayman [Argonne National Lab. (ANL), Argonne, IL (United States); Kim, Namdoo [Argonne National Lab. (ANL), Argonne, IL (United States); Rousseau, Aymeric [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-06-01

    The objective of this project is to develop and demonstrate a process that, at a minimum, provides more robust information that can be used to calibrate inputs applicable under the CAFE model’s existing structure. The project will be more fully successful if a process can be developed that minimizes the need for decision trees and replaces the synergy factors by inputs provided directly from a vehicle simulation tool. The report provides a description of the process that was developed by Argonne National Laboratory and implemented in Autonomie.

  14. Estimating nonstationary input signals from a single neuronal spike train

    OpenAIRE

    Kim, Hideaki; Shinomoto, Shigeru

    2012-01-01

    Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic as...

  15. Input modelling for subchannel analysis of CANFLEX fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jun, Ji Su; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    This report describes the input modelling for subchannel analysis of the CANFLEX fuel bundle using the CASS (Candu thermalhydraulic Analysis by Subchannel approacheS) code, which has been developed for subchannel analysis of CANDU fuel channels. The CASS code can give different calculation results depending on the user's input modelling. Hence, the objective of this report is to provide the background information for input modelling and the accuracy of the input data, and thereby give confidence in the calculation results. (author). 11 refs., 3 figs., 4 tabs.

  16. Regional Input Output Table for the State of Punjab

    OpenAIRE

    Singh, Inderjeet; Singh, Lakhwinder

    2011-01-01

    Because of the policy relevance of regional input-output analysis, a vast literature on the construction of regional input-output tables has emerged in the recent past, especially on non-survey and hybrid methods. Although the construction of regional input-output tables is not new in India, generation of input-output tables using non-survey methods is a relatively rare phenomenon. This work validates alternative non-survey, location quotient methodologies and finally uses comparatively bette...
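
    The simple location quotient (SLQ) step, one of the non-survey methods the record refers to, can be sketched in a few lines: national input coefficients are scaled down for supplying sectors whose regional employment share falls below their national share. The three-sector matrix and employment figures below are hypothetical.

        import numpy as np

        A_national = np.array([[0.10, 0.20, 0.05],
                               [0.15, 0.10, 0.20],
                               [0.05, 0.10, 0.15]])    # national input coefficients
        emp_region = np.array([40.0, 25.0, 10.0])      # regional employment by sector
        emp_nation = np.array([300.0, 350.0, 200.0])   # national employment by sector

        slq = (emp_region / emp_region.sum()) / (emp_nation / emp_nation.sum())

        # A sector that is under-represented regionally (SLQ < 1) cannot supply the
        # national share of inputs, so its row is scaled down; rows with SLQ >= 1
        # keep the national coefficients.
        A_region = A_national * np.minimum(slq, 1.0)[:, np.newaxis]
        print(np.round(A_region, 3))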

  17. Climate Change: Implications for the Assumptions, Goals and Methods of Urban Environmental Planning

    Directory of Open Access Journals (Sweden)

    Kristina Hill

    2016-12-01

    Full Text Available As a result of increasing awareness of the implications of global climate change, shifts are becoming necessary and apparent in the assumptions, concepts, goals and methods of urban environmental planning. This review will present the argument that these changes represent a genuine paradigm shift in urban environmental planning. Reflection and action to develop this paradigm shift is critical now and in the next decades, because environmental planning for cities will only become more urgent as we enter a new climate period. The concepts, methods and assumptions that urban environmental planners have relied on in previous decades to protect people, ecosystems and physical structures are inadequate if they do not explicitly account for a rapidly changing regional climate context, specifically from a hydrological and ecological perspective. The over-arching concept of spatial suitability that guided planning in most of the 20th century has already given way to concepts that address sustainability, recognizing the importance of temporality. Quite rapidly, the concept of sustainability has been replaced in many planning contexts by the priority of establishing resilience in the face of extreme disturbance events. Now even this concept of resilience is being incorporated into a novel concept of urban planning as a process of adaptation to permanent, incremental environmental changes. This adaptation concept recognizes the necessity for continued resilience to extreme events, while acknowledging that permanent changes are also occurring as a result of trends that have a clear direction over time, such as rising sea levels. Similarly, the methods of urban environmental planning have relied on statistical data about hydrological and ecological systems that will not adequately describe these systems under a new climate regime. These methods are beginning to be replaced by methods that make use of early warning systems for regime shifts, and process

  19. 42 CFR 460.138 - Committees with community input.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Committees with community input. 460.138 Section 460.138 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... community input. A PACE organization must establish one or more committees, with community input, to do...

  20. Comparison of Linear Microinstability Calculations of Varying Input Realism

    Energy Technology Data Exchange (ETDEWEB)

    G. Rewoldt

    2003-09-08

    The effect of varying "input realism", or varying completeness of the input data for linear microinstability calculations, in particular on the critical value of the ion temperature gradient for the ion temperature gradient mode, is investigated using gyrokinetic and gyrofluid approaches. The calculations show that varying input realism can have a substantial quantitative effect on the results.

  1. On the Nature of the Input in Optimality Theory

    DEFF Research Database (Denmark)

    Heck, Fabian; Müller, Gereon; Vogel, Ralf

    2002-01-01

    The input has two main functions in optimality theory (Prince and Smolensky 1993). First, the input defines the candidate set, in other words it determines which output candidates compete for optimality, and which do not. Second, the input is referred to by faithfulness constraints that prohibit...

  2. REFLECTIONS ON THE INOPERABILITY INPUT-OUTPUT MODEL

    NARCIS (Netherlands)

    Dietzenbacher, Erik; Miller, Ronald E.

    2015-01-01

    We argue that the inoperability input-output model is a straightforward - albeit potentially very relevant - application of the standard input-output model. In addition, we propose two less standard input-output approaches as alternatives to take into consideration when analyzing the effects of disa
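
    The core computation of the inoperability input-output model mirrors the standard Leontief solve, which is exactly the point made here. In the sketch below, q is sector inoperability, A* the interdependency matrix derived from the input-output table, and c* the initial perturbation; the three-sector numbers are hypothetical.

        import numpy as np

        # Inoperability input-output model: q = A* q + c*, so q = (I - A*)^-1 c*.
        A_star = np.array([[0.0, 0.2, 0.1],
                           [0.3, 0.0, 0.2],
                           [0.1, 0.1, 0.0]])
        c_star = np.array([0.10, 0.0, 0.0])   # 10% direct loss in sector 1 only

        q = np.linalg.solve(np.eye(3) - A_star, c_star)
        print(np.round(q, 4))   # indirect losses propagate to sectors 2 and 3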

  3. Waste treatment in physical input-output analysis

    NARCIS (Netherlands)

    Dietzenbacher, E

    2005-01-01

    When compared to monetary input-output tables (MIOTs), a distinctive feature of physical input-output tables (PIOTs) is that they include the generation of waste as part of a consistent accounting framework. As a consequence, however, physical input-output analysis thus requires that the treatment o

  4. Comparison between Input Hypothesis and Interaction Hypothesis

    Institute of Scientific and Technical Information of China (English)

    宗琦

    2016-01-01

    Second Language Acquisition has received more and more attention since the 1950s, when it became an autonomous field of research. Linguists have carried out many theoretical and empirical studies with the express purpose of promoting Second Language Acquisition. Krashen's Input Hypothesis and Long's Interaction Hypothesis are the most influential among these studies. They both play important roles in language teaching and learning. The paper presents an account of the two theories, including their main claims and theoretical foundations as well as some related empirical work, and tries to investigate the commonalities and differences between them, based on the literature and empirical studies. The purpose of this paper is to provide a clear outline of the two theories and to point out how they make interrelated yet distinct predictions about how a second language is learned. It is meaningful because the results can offer valuable guidance to language teachers and learners in teaching or acquiring a language better.

  5. Investigating Text Input Methods for Mobile Phones

    Directory of Open Access Journals (Sweden)

    Barry O’Riordan

    2005-01-01

    Full Text Available Human-computer interaction is a primary factor in the success or failure of any device, but if an objective view is taken of the current mobile phone market, you would be forgiven for thinking usability was secondary to aesthetics. Many phone manufacturers modify the design of phones to differ from the competition and to target fashion trends, usually at the expense of usability and performance. There is a lack of awareness among many buyers of the usability of the device they are purchasing, and the disposability of modern technology is an effect rather than a cause of this. Designing new text entry methods for mobile devices can be expensive and labour-intensive. The assessment and comparison of a new text entry method with current methods is a necessary part of the design process, and the best way to do this is through an empirical evaluation. The aim of the study was to establish which mobile phone text input method best suits the requirements of a select group of target users. This study used a diverse range of users to compare devices that are in everyday use by most of the adult population. The proliferation of these devices is as yet unmatched by the study of their application and the consideration of their user-friendliness.

  6. Parvalbumin-producing cortical interneurons receive inhibitory inputs on proximal portions and cortical excitatory inputs on distal dendrites.

    Science.gov (United States)

    Kameda, Hiroshi; Hioki, Hiroyuki; Tanaka, Yasuyo H; Tanaka, Takuma; Sohn, Jaerin; Sonomura, Takahiro; Furuta, Takahiro; Fujiyama, Fumino; Kaneko, Takeshi

    2012-03-01

    To examine inputs to parvalbumin (PV)-producing interneurons, we generated transgenic mice expressing somatodendritic membrane-targeted green fluorescent protein specifically in the interneurons, and completely visualized their dendrites and somata. Using immunolabeling for vesicular glutamate transporter (VGluT)1, VGluT2, and vesicular GABA transporter, we found that VGluT1-positive terminals made contacts 4- and 3.1-fold more frequently with PV-producing interneurons than VGluT2-positive and GABAergic terminals, respectively, in the primary somatosensory cortex. Even in layer 4, where VGluT2-positive terminals were most densely distributed, VGluT1-positive inputs to PV-producing interneurons were 2.4-fold more frequent than VGluT2-positive inputs. Furthermore, although GABAergic inputs to PV-producing interneurons were as numerous as VGluT2-positive inputs in most cortical layers, GABAergic inputs clearly preferred the proximal dendrites and somata of the interneurons, indicating that the sites of GABAergic inputs were more optimized than those of VGluT2-positive inputs. Simulation analysis with a PV-producing interneuron model compatible with the present morphological data revealed a plausible reason for this observation, by showing that GABAergic and glutamatergic postsynaptic potentials evoked by inputs to distal dendrites were attenuated to 60 and 87%, respectively, of those evoked by somatic inputs. As VGluT1-positive and VGluT2-positive axon terminals were presumed to be cortical and thalamic glutamatergic inputs, respectively, cortical excitatory inputs to PV-producing interneurons outnumbered the thalamic excitatory and intrinsic inhibitory inputs more than two-fold in any cortical layer. Although thalamic inputs are known to evoke about two-fold larger unitary excitatory postsynaptic potentials than cortical ones, the present results suggest that cortical inputs control PV-producing interneurons at least as strongly as thalamic inputs.

  7. Medial superior olivary neurons receive surprisingly few excitatory and inhibitory inputs with balanced strength and short-term dynamics.

    Science.gov (United States)

    Couchman, Kiri; Grothe, Benedikt; Felmy, Felix

    2010-12-15

    Neurons in the medial superior olive (MSO) process microsecond interaural time differences, the major cue for localizing low-frequency sounds, by comparing the relative arrival time of binaural, glutamatergic excitatory inputs. This coincidence detection mechanism is additionally shaped by highly specialized glycinergic inhibition. Traditionally, it is assumed that the binaural inputs are conveyed by many independent fibers, but such an anatomical arrangement may decrease temporal precision. Short-term depression on the other hand might enhance temporal fidelity during ongoing activity. For the first time we show that binaural coincidence detection in MSO neurons may require surprisingly few but strong inputs, challenging long-held assumptions about mammalian coincidence detection. This study exclusively uses adult gerbils for in vitro electrophysiology, single-cell electroporation and immunohistochemistry to characterize the size and short-term plasticity of inputs to the MSO. We find that the excitatory and inhibitory inputs to the MSO are well balanced both in strength and short-term dynamics, redefining this fastest of all mammalian coincidence detector circuits.

  8. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    Science.gov (United States)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate estimates using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis, and recommendations are provided for applying this method with piloted inputs.
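
    The underlying idea, estimating the frequency response as the ratio of output to input Fourier transforms at the excited frequencies, can be sketched offline; the paper's method additionally runs recursively in real time. The first-order "aircraft" below and the excitation frequencies are assumptions for illustration only.

        import numpy as np

        fs, T = 100.0, 30.0
        t = np.arange(0, T, 1 / fs)
        freqs = np.array([0.3, 0.7, 1.3, 2.1])       # excitation frequencies, Hz
        u = sum(np.sin(2 * np.pi * f * t + k) for k, f in enumerate(freqs))

        # Simulate the response y of a first-order lag, dy/dt = -2 y + 2 u,
        # standing in for the short-period pitch response.
        y = np.zeros_like(u)
        for i in range(1, len(t)):
            y[i] = y[i - 1] + (1 / fs) * (-2.0 * y[i - 1] + 2.0 * u[i - 1])

        U, Y = np.fft.rfft(u), np.fft.rfft(y)
        grid = np.fft.rfftfreq(len(t), 1 / fs)
        for f in freqs:
            k = np.argmin(np.abs(grid - f))
            H = Y[k] / U[k]
            print(f"{f:4.1f} Hz: |H| = {abs(H):.3f}, "
                  f"phase = {np.degrees(np.angle(H)):.1f} deg")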

  9. Adaptive control for an uncertain robotic manipulator with input saturations

    Institute of Scientific and Technical Information of China (English)

    Trong-Toan TRAN; Shuzhi Sam GE; Wei HE

    2016-01-01

    In this paper, we address the control problem of an uncertain robotic manipulator with input saturations, unknown input scalings and disturbances. For this purpose, a model reference adaptive control-like (MRAC-like) scheme is used to handle the input saturations. The model reference is input-to-state stable (ISS) and driven by the errors between the required control signals and the input saturations. The uncertain parameters are dealt with by using the linear-in-the-parameters property of robotic dynamics, while unknown input scalings and disturbances are handled by a non-regressor based approach. Our design ensures that all the signals in the closed-loop system are bounded, and that the tracking error converges to a compact set which depends on the predetermined bounds of the control inputs. Simulation on a planar elbow manipulator with two joints is provided to illustrate the effectiveness of the proposed controller.

  10. Micromachined dual input axis rate gyroscope

    Science.gov (United States)

    Juneau, Thor Nelson

    The need for inexpensive yet reliable angular rate sensors in fields ranging from automotive to consumer electronics has motivated prolific micromachined rate gyroscope research. The vast majority of research has focused on single input axis rate gyroscopes based upon either translational resonance, such as tuning forks, or structural mode resonance, such as vibrating rings. However, this work presents a novel, contrasting approach based on angular resonance of a rotating rigid rotor suspended by torsional springs. The inherent symmetry of the circular design allows angular rate measurement about two axes simultaneously, hence the name micromachined dual-axis rate gyroscope. The underlying theory of operation, mechanical structure design optimization, electrical interface circuitry, and signal processing are described in detail. Several operational versions were fabricated using two different fully integrated surface micromachining processes as proof of concept. The heart of the dual-axis rate gyroscope is a ~2 μm thick polysilicon disk or rotor suspended above the substrate by a four-beam suspension. When this rotor is driven into angular oscillation about the axis perpendicular to the substrate, a rotation rate about the two axes parallel to the substrate induces an out-of-plane rotor tilting motion due to Coriolis acceleration. This tilting motion is capacitively measured, and on-board integrated signal processing provides two output voltages proportional to angular rate input about the two axes parallel to the substrate. The design process begins with the derivation of gyroscopic dynamics. The equations suggest that tuning sense mode frequencies to the drive oscillation frequency can vastly increase mechanical sensitivity. Hence the supporting four-beam suspension is designed such that electrostatic tuning can match modes despite process variations. The electrostatic tuning range is limited only by rotor collapse to the substrate when tuning-voltage induced

  11. Input/output plugin architecture for MDSplus

    Energy Technology Data Exchange (ETDEWEB)

    Stillerman, Joshua, E-mail: jas@psfc.mit.edu [Massachusetts Institute of Technology, 175 Albany Street, Cambridge, MA 02139 (United States); Fredian, Thomas, E-mail: twf@psfc.mit.edu [Massachusetts Institute of Technology, 175 Albany Street, Cambridge, MA 02139 (United States); Manduchi, Gabriele, E-mail: gabriele.manduchi@igi.cnr.it [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy)

    2014-05-15

    The first version of MDSplus was released in 1991 for VAX/VMS. Since that time the underlying file formats have remained constant. The software, however, has evolved: it was ported to Unix, Linux, Windows, and Macintosh. In 1997 a TCP-based protocol, mdsip, was added to provide network access to MDSplus data. In 2011 a mechanism was added to allow protocol plugins to permit the use of other transport mechanisms, such as ssh, to access data. This paper describes a similar design which permits the insertion of plugins to handle the reading and writing of MDSplus data at the data storage level. Tree paths become URIs which specify the protocol, host, and protocol-specific information. The protocol is provided by a dynamically activated shared library that can provide any consistent subset of the data store access API, treeshr. The existing low-level network protocol, called mdsip, is activated by defining tree paths like "host::/directory". Using the new plugin mechanism this is re-implemented as an instance of the general plugin that replaces the low-level treeshr input/output routines. It is specified by using a path like "mdsip://host/directory". This architecture will make it possible to adapt the MDSplus data organization and analysis tools to other underlying data storage. The first new application of this, after the existing network protocol is implemented, will be a plugin based on a key-value store. Key-value stores can provide inexpensive, scalable, redundant data storage. An example of this might be an Amazon G3 plugin which would let you specify a tree path such as "AG3://container" to access MDSplus data stored in the cloud.
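
    The URI-dispatched plugin idea can be sketched in a few lines. The real mechanism is a dynamically activated shared library behind the treeshr API; the Python registry, class names, and the "ag3" scheme below are hypothetical stand-ins, not the MDSplus interface.

        from urllib.parse import urlparse

        class LocalStore:
            def __init__(self, location):
                self.location, self.data = location, {}
            def read(self, node):
                return self.data[node]
            def write(self, node, value):
                self.data[node] = value

        class KeyValueStore(LocalStore):
            # Stand-in for a cloud key-value backend ("AG3://container"-style paths).
            pass

        PLUGINS = {"file": LocalStore, "ag3": KeyValueStore}

        def open_tree(uri):
            # The scheme selects the backend; the rest of the URI is passed through
            # as backend-specific location information.
            parsed = urlparse(uri)
            if parsed.scheme not in PLUGINS:
                raise ValueError(f"no I/O plugin registered for {parsed.scheme!r}")
            return PLUGINS[parsed.scheme](parsed.netloc + parsed.path)

        tree = open_tree("ag3://container")
        tree.write("\\TOP:SIGNAL", [1, 2, 3])
        print(tree.read("\\TOP:SIGNAL"))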

  12. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior

    Science.gov (United States)

    Tran, Van; McCall, Matthew N.; McMurray, Helene R.; Almudevar, Anthony

    2013-01-01

    Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles. PMID:24376454
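
    The TBN update rule itself is compact enough to state directly: gene i switches on when its weighted input sum reaches its threshold, with all genes updated synchronously. The three-gene weights, thresholds, and the negative self-weights standing in for self-degradation are illustrative, not the yeast cell-cycle network studied here.

        import numpy as np

        W = np.array([[-1,  0,  1],     # gene 0: self-degrades, activated by gene 2
                      [ 1, -1,  0],     # gene 1: activated by gene 0, self-degrades
                      [ 0,  1, -1]])    # gene 2: activated by gene 1, self-degrades
        theta = np.array([1, 1, 1])

        def step(x):
            # Synchronous threshold update: x_i <- 1 if sum_j W[i, j] x_j >= theta_i.
            return (W @ x >= theta).astype(int)

        # Iterate until the trajectory re-enters a previously visited state.
        x, seen = np.array([1, 0, 0]), []
        while tuple(x) not in seen:
            seen.append(tuple(x))
            x = step(x)
        print("trajectory:", seen, "-> re-enters at", tuple(x))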

  13. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior

    Directory of Open Access Journals (Sweden)

    Van eTran

    2013-12-01

    Full Text Available Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks. We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles.

  14. Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis

    Directory of Open Access Journals (Sweden)

    David B. Flora

    2012-03-01

    Full Text Available We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables.

  15. Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis.

    Science.gov (United States)

    Flora, David B; Labrish, Cathy; Chalmers, R Philip

    2012-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables.

  16. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to the supervised hot spot prediction algorithms, the semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated a better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction that considers all three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which is implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
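
    A compact sketch of this kind of iterative propagation (our reading of the description above, not the released IterPropMCS code; the distance kernel, the confidence rule, and the omission of the density-based cluster check are all simplifying assumptions) might look as follows.

        import numpy as np
        from scipy.sparse.csgraph import shortest_path

        def iterative_propagation(D, labels, n_iter=5, keep_frac=0.2):
            """D: symmetric distance matrix between residues (0 = no edge);
            labels: +1 (hot spot) / -1 (not) for labeled residues, 0 otherwise."""
            geo = shortest_path(D, directed=False)   # manifold assumption
            labels = labels.astype(float).copy()
            for _ in range(n_iter):
                lab = np.flatnonzero(labels != 0)
                unl = np.flatnonzero(labels == 0)
                if unl.size == 0:
                    break
                # smoothness: nearby labeled residues vote more strongly
                w = np.exp(-geo[np.ix_(unl, lab)])
                score = w @ labels[lab] / w.sum(axis=1)
                # promote only the most confident predictions this round
                idx = np.argsort(-np.abs(score))[:max(1, int(keep_frac * unl.size))]
                labels[unl[idx]] = np.sign(score[idx])
            return labels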

  17. On the relevance of assumptions associated with classical factor analytic approaches.

    Science.gov (United States)

    Kasper, Daniel; Unlü, Ali

    2013-01-01

    A personal trait, for example a person's cognitive ability, represents a theoretical concept postulated to explain behavior. Interesting constructs are latent, that is, they cannot be observed. Latent variable modeling constitutes a methodology to deal with hypothetical constructs. Constructs are modeled as random variables and become components of a statistical model. As random variables, they possess a probability distribution in the population of reference. In applications, this distribution is typically assumed to be the normal distribution. The normality assumption may be reasonable in many cases, but there are situations where it cannot be justified. For example, this is true for criterion-referenced tests or for background characteristics of students in large scale assessment studies. Nevertheless, the normal procedures in combination with the classical factor analytic methods are frequently pursued, although the effects of violating this "implicit" assumption are not clear in general. In a simulation study, we investigate whether classical factor analytic approaches can be instrumental in estimating the factorial structure and properties of the population distribution of a latent personal trait from educational test data, when violations of classical assumptions such as the aforementioned are present. The results indicate that having a latent non-normal distribution clearly affects the estimation of the distribution of the factor scores and properties thereof. Thus, when the population distribution of a personal trait is assumed to be non-symmetric, we recommend avoiding those factor analytic approaches for estimation of a person's factor score, even though the number of extracted factors and the estimated loading matrix may not be strongly affected. An application to the Progress in International Reading Literacy Study (PIRLS) is given. Comments on possible implications for the Programme for International Student Assessment (PISA) complete the presentation.

  18. Evaluating the reliability of equilibrium dissolution assumption from residual gasoline in contact with water saturated sands

    Science.gov (United States)

    Lekmine, Greg; Sookhak Lari, Kaveh; Johnston, Colin D.; Bastow, Trevor P.; Rayner, John L.; Davis, Greg B.

    2017-01-01

    Understanding dissolution dynamics of hazardous compounds from complex gasoline mixtures is a key to long-term predictions of groundwater risks. The aim of this study was to investigate if the local equilibrium assumption for BTEX and TMBs (trimethylbenzenes) dissolution was valid under variable saturation in two-dimensional flow conditions and to evaluate the impact of local heterogeneities when equilibrium is verified at the scale of investigation. An initial residual gasoline saturation was established over the upper two-thirds of a water saturated sand pack. A constant horizontal pore velocity was maintained and water samples were recovered across 38 sampling ports over 141 days. Inside the residual NAPL zone, BTEX and TMBs dissolution curves were in agreement with the TMVOC model based on the local equilibrium assumption. Results compared to previous numerical studies suggest the presence of small scale dissolution fingering created perpendicular to the horizontal dissolution front, mainly triggered by heterogeneities in the medium structure and the local NAPL residual saturation. In the transition zone, TMVOC was able to represent a range of behaviours exhibited by the data, confirming equilibrium or near-equilibrium dissolution at the scale of investigation. The model locally showed discrepancies with the most soluble compounds, i.e. benzene and toluene, due to local heterogeneities, indicating that at a smaller scale flow bypassing and channelling may have occurred. In these conditions mass transfer rates were still high enough to fall under the equilibrium assumption in TMVOC at the scale of investigation. Comparisons with other models involving upscaled mass transfer rates demonstrated that such approximations with TMVOC could lead to overestimation of BTEX dissolution rates and underestimation of the total remediation time.

  19. The Avalanche Hypothesis and Compression of Morbidity: Testing Assumptions through Cohort-Sequential Analysis.

    Directory of Open Access Journals (Sweden)

    Jordan Silberman

    Full Text Available The compression of morbidity model posits a breakpoint in the adult lifespan that separates an initial period of relative health from a subsequent period of ever increasing morbidity. Researchers often assume that such a breakpoint exists; however, this assumption is hitherto untested. To test the assumption that a breakpoint exists (which we term a morbidity tipping point) separating a period of relative health from a subsequent deterioration in health status. An analogous tipping point for healthcare costs was also investigated. Four years of adults' (N = 55,550) morbidity and costs data were retrospectively analyzed. Data were collected in Pittsburgh, PA between 2006 and 2009; analyses were performed in Rochester, NY and Ann Arbor, MI in 2012 and 2013. Cohort-sequential and hockey stick regression models were used to characterize long-term trajectories and tipping points, respectively, for both morbidity and costs. Morbidity increased exponentially with age (P<.001). A morbidity tipping point was observed at age 45.5 (95% CI, 41.3-49.7). An exponential trajectory was also observed for costs (P<.001), with a costs tipping point occurring at age 39.5 (95% CI, 32.4-46.6). Following their respective tipping points, both morbidity and costs increased substantially (Ps<.001). Findings support the existence of a morbidity tipping point, confirming an important but untested assumption. This tipping point, however, may occur earlier in the lifespan than is widely assumed. An "avalanche of morbidity" occurred after the morbidity tipping point: an ever increasing rate of morbidity progression. For costs, an analogous tipping point and "avalanche" were observed. The time point at which costs began to increase substantially occurred approximately 6 years before health status began to deteriorate.
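
    Hockey stick regression of the sort used here can be reproduced in a few lines. The sketch below (ours, fitted to synthetic data with an assumed breakpoint at 45.5, not the study's records) treats the response as flat up to the tipping point and linearly increasing after it, and recovers the breakpoint by nonlinear least squares.

        import numpy as np
        from scipy.optimize import curve_fit

        def hockey_stick(age, baseline, tip, slope):
            # flat at `baseline` before the tipping point, linear after it
            return baseline + slope * np.maximum(age - tip, 0.0)

        rng = np.random.default_rng(1)
        age = rng.uniform(20, 80, 2000)
        y = hockey_stick(age, 1.0, 45.5, 0.08) + rng.normal(0, 0.3, age.size)

        p, _ = curve_fit(hockey_stick, age, y, p0=[1.0, 40.0, 0.1])
        print(f"estimated tipping point: {p[1]:.1f} years")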

  20. Bayesian Mass Estimates of the Milky Way: The Dark and Light Sides of Parameter Assumptions

    Science.gov (United States)

    Eadie, Gwendolyn M.; Harris, William E.

    2016-10-01

    We present mass and mass profile estimates for the Milky Way (MW) Galaxy using the Bayesian analysis developed by Eadie et al. and using globular clusters (GCs) as tracers of the Galactic potential. The dark matter and GCs are assumed to follow different spatial distributions; we assume power-law model profiles and use the model distribution functions described in Evans et al. and Deason et al. We explore the relationships between assumptions about model parameters and how these assumptions affect mass profile estimates. We also explore how using subsamples of the GC population beyond certain radii affects mass estimates. After exploring the posterior distributions of different parameter assumption scenarios, we conclude that a conservative estimate of the Galaxy’s mass within 125 kpc is 5.22 × 10^11 M⊙, with a 50% probability region of (4.79, 5.63) × 10^11 M⊙. Extrapolating out to the virial radius, we obtain a virial mass for the MW of 6.82 × 10^11 M⊙ with 50% credible region of (6.06, 7.53) × 10^11 M⊙ (r_vir = 185 ± 7 kpc). If we consider only the GCs beyond 10 kpc, then the virial mass is 9.02 (5.69, 10.86) × 10^11 M⊙ (r_vir = 198 +19/-24 kpc). We also arrive at an estimate of the velocity anisotropy parameter β of the GC population, which is β = 0.28 with a 50% credible region of (0.21, 0.35). Interestingly, the mass estimates are sensitive to both the dark matter halo potential and visible matter tracer parameters, but are not very sensitive to the anisotropy parameter.

  1. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    Science.gov (United States)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  2. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions.

    Science.gov (United States)

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP1 (ASM1 & 3) and WWTP2 (ASM2d). The second set of models includes a reactive settler, which extends the description of the non-reactive TSS sedimentation and transport in the reference case with the full set of ASM processes. Finally, the third set of models is based on including electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the S(NOx) concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases X(OHO) and X(ANO) decay; and, finally, (5) increases the growth of X(PAO) and formation of X(PHA,Stor) for ASM2d, which has a major impact on the whole P removal system. Introduction of electron acceptor dependent decay leads to a substantial increase of the concentration of X(ANO), X(OHO) and X(PAO) in the bottom of the clarifier. The paper ends with a critical discussion of the influence of the different model assumptions, and emphasizes the need for a model user to understand the significant differences in simulation results that are obtained when applying different combinations of 'standard' models.

  3. From the lab to the world: The paradigmatic assumption and the functional cognition of avian foraging

    Institute of Scientific and Technical Information of China (English)

    Danielle SULIKOWSKI; Darren BURKE

    2015-01-01

    Mechanisms of animal learning and memory were traditionally studied without reference to niche-specific functional considerations. More recently, ecological demands have informed such investigations, most notably with respect to foraging in birds. In parallel, behavioural ecologists, primarily concerned with functional optimization, have begun to consider the role of mechanistic factors, including cognition, to explain apparent deviations from optimal predictions. In the present paper we discuss the application of laboratory-based constructs and paradigms of cognition to the real-world challenges faced by avian foragers. We argue that such applications have been handicapped by what we term the 'paradigmatic assumption': the assumption that a given laboratory paradigm maps well enough onto a congruent cognitive mechanism (or cognitive ability) to justify conflation of the two. We present evidence against the paradigmatic assumption and suggest that to achieve a profitable integration between function and mechanism, with respect to animal cognition, a new conceptualization of cognitive mechanisms (functional cognition) is required. This new conceptualization should define cognitive mechanisms based on the informational properties of the animal's environment and the adaptive challenges faced. Cognitive mechanisms must be examined in settings that mimic the important aspects of the natural environment, using customized tasks designed to probe defined aspects of the mechanisms' operation. We suggest that this approach will facilitate investigations of the functional and evolutionary relevance of cognitive mechanisms, as well as the patterns of divergence, convergence and specialization of cognitive mechanisms within and between species [Current Zoology 61 (2): 328-340, 2015].

  4. Assumption-versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2016-08-04

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  5. Drug Distribution to Human Tissues: Prediction and Examination of the Basic Assumption in In Vivo Pharmacokinetics-Pharmacodynamics (PK/PD) Research.

    Science.gov (United States)

    Poulin, Patrick

    2015-06-01

    The tissue:plasma partition coefficients (Kp) are good indicators of the extent of tissue distribution. Therefore, advanced tissue composition-based models were used to predict the Kp values of drugs under in vivo conditions on the basis of in vitro and physiological input data. These models, however, focus on animal tissues and do not challenge the predictions with human tissues for drugs. The first objective of this study was to predict the experimentally determined Kp values of seven human tissues for 26 drugs. In all, 95% of the predicted Kp values are within 2.5-fold error of the observed values in humans. Accordingly, these results suggest that the tissue composition-based model used in this study is able to provide accurate estimates of drug partitioning in the studied human tissues. Furthermore, as Kp equals the ratio of total concentration between tissue and plasma, or the ratio of unbound fraction between plasma (fup) and tissue (fut), this parameter Kp would deviate from unity. Therefore, the second objective was to examine the corresponding relationships between fup and fut values experimentally determined in humans for several drugs. The results also indicate that fup may significantly deviate from fut; the discrepancies are governed by the dissimilarities in the binding and ionization on both sides of the membrane, which were captured by the tissue composition-based model. Hence, this violated the basic assumption in in vivo pharmacokinetics-pharmacodynamics (PK/PD) research, since the free drug concentration in tissue and plasma was not equal, particularly for the ionizable drugs, due to the pH gradient effect on the fraction of unionized drug in plasma (fuip) and tissue (fuit) (i.e., fup × fuip × total plasma concentration = fut × fuit × total tissue concentration, and, hence, the free drug concentration in plasma and tissue differed by fuip/fuit). Therefore, this assumption should be adjusted for the ionized drugs, and, hence, a

  6. Tests of data quality, scaling assumptions, and reliability of the Danish SF-36

    DEFF Research Database (Denmark)

    Bjorner, J B; Damsgaard, M T; Watt, T;

    1998-01-01

    We used general population data (n = 4084) to examine data completeness, response consistency, tests of scaling assumptions, and reliability of the Danish SF-36 Health Survey. We compared traditional multitrait scaling analyses to analyses using polychoric correlations and Spearman correlations...... discriminant validity, equal item-own scale correlations, and equal variances) were satisfactory in the total sample and in all subgroups. The SF-36 could discriminate between levels of health in all subgroups, but there were skewness, kurtosis, and ceiling effects in many subgroups (elderly people and people...

  7. Science with the Square Kilometer Array: Motivation, Key Science Projects, Standards and Assumptions

    CERN Document Server

    Carilli, C

    2004-01-01

    The Square Kilometer Array (SKA) represents the next major, and natural, step in radio astronomical facilities, providing two orders of magnitude increase in collecting area over existing telescopes. In a series of meetings, starting in Groningen, the Netherlands (August 2002) and culminating in a 'science retreat' in Leiden (November 2003), the SKA International Science Advisory Committee (ISAC) conceived of, and carried out, a complete revision of the SKA science case (to appear in New Astronomy Reviews). This preface includes: (i) general introductory material, (ii) summaries of the key science programs, and (iii) a detailed listing of standards and assumptions used in the revised science case.

  8. Condition for Energy Efficient Watermarking with Random Vector Model without WSS Assumption

    CERN Document Server

    Yan, Bin; Guo, Yinjing

    2009-01-01

    Energy efficient watermarking preserves the watermark energy after a linear attack as much as possible. In this letter we consider non-stationary signal models and derive conditions for energy efficient watermarking under a random vector model without the WSS assumption. We find that the covariance matrix of the energy efficient watermark should be proportional to the host covariance matrix to best resist optimal linear removal attacks. For WSS processes, our result reduces to the well-known power spectrum condition. An intuitive geometric interpretation of the results is also discussed, which in turn provides a simpler proof of the main results.
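
    The stated condition is straightforward to realize numerically. In the Python sketch below (our construction for a synthetic host covariance, not the paper's experiments), a watermark with covariance proportional to the host covariance, Cov(w) = c · Cov(x), is drawn through the Cholesky factor of the scaled host covariance.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        A = rng.normal(size=(n, n))
        host_cov = A @ A.T + n * np.eye(n)     # synthetic positive-definite Cov(x)

        c = 1e-2                               # embedding strength
        L = np.linalg.cholesky(c * host_cov)
        w = L @ rng.normal(size=n)             # one watermark draw, Cov(w) = c Cov(x)

        # empirical check over many draws
        W = L @ rng.normal(size=(n, 10000))
        print(np.allclose(np.cov(W), c * host_cov, atol=0.05))   # True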

  9. Expressing Environment Assumptions and Real-time Requirements for a Distributed Embedded System with Shared Variables

    DEFF Research Database (Denmark)

    Tjell, Simon; Fernandes, João Miguel

    2008-01-01

    In a distributed embedded system, it is often necessary to share variables among its computing nodes to allow the distribution of control algorithms. It is therefore necessary to include a component in each node that provides the service of variable sharing. For that type of component, this paper...... discusses how to create a Colored Petri Nets (CPN) model that formally expresses the following elements in a clearly separated structure: (1) assumptions about the behavior of the environment of the component, (2) real-time requirements for the component, and (3) a possible solution in terms of an algorithm...

  10. Local conservation scores without a priori assumptions on neutral substitution rates

    Directory of Open Access Journals (Sweden)

    Hagenauer Joachim

    2008-04-01

    Full Text Available Background: Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates on neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate, and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. Results: We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates, and few a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE, while qualitatively very different patterns from phastCons are observed. Opposed to standard methods, KuLCons can be extended to more complex evolutionary models, e.g. taking insertion and deletion events into account, and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model

  11. Untested assumptions: psychological research and credibility assessment in legal decision-making

    Directory of Open Access Journals (Sweden)

    Jane Herlihy

    2015-05-01

    Full Text Available Background: Trauma survivors often have to negotiate legal systems such as refugee status determination or the criminal justice system. Methods & results: We outline and discuss the contribution which research on trauma and related psychological processes can make to two particular areas of law where complex and difficult legal decisions must be made: in claims for refugee and humanitarian protection, and in reporting and prosecuting sexual assault in the criminal justice system. Conclusion: There is a breadth of psychological knowledge that, if correctly applied, would limit the inappropriate reliance on assumptions and myth in legal decision-making in these settings. Specific recommendations are made for further study.

  12. What is a god? Metatheistic assumptions in Old Testament Yahwism(s)

    Directory of Open Access Journals (Sweden)

    J W Gericke

    2006-09-01

    Full Text Available In this article, the author provides a prolegomena to further research attempting to answer a most fundamental and basic question, much more so than what has thus far been the case in the disciplines of Old Testament theology and history of Israelite religion. It concerns the implicit assumptions in the Hebrew Bible's discourse about the fundamental nature of deity. In other words, the question is not, 'What is YHWH like?' but rather, 'What, according to the Old Testament texts, is a god?'

  13. Bases, Assumptions, and Results of the Flowsheet Calculations for the Decision Phase Salt Disposition Alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.

    2001-03-26

    The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted wasteform. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.

  14. Examining Assumptions and Limitations of Research on the Effects of Emerging Technologies for Teaching and Learning in Higher Education

    Science.gov (United States)

    Kirkwood, Adrian; Price, Linda

    2013-01-01

    This paper examines assumptions and beliefs underpinning research into educational technology. It critically reviews some approaches used to investigate the impact of technologies for teaching and learning. It focuses on comparative studies, performance comparisons and attitudinal studies to illustrate how under-examined assumptions lead to…

  15. The Assumption of Proportional Components when Candecomp Is Applied to Symmetric Matrices in the Context of INDSCAL

    Science.gov (United States)

    Dosse, Mohammed Bennani; Berge, Jos M. F.

    2008-01-01

    The use of Candecomp to fit scalar products in the context of INDSCAL is based on the assumption that the symmetry of the data matrices involved causes the component matrices to be equal when Candecomp converges. Ten Berge and Kiers gave examples where this assumption is violated for Gramian data matrices. These examples are believed to be local…

  16. On the Impact of the Dutch Educational Supervision Act : Analyzing Assumptions Concerning the Inspection of Primary Education

    NARCIS (Netherlands)

    Ehren, Melanie C. M.; Leeuw, Frans L.; Scheerens, Jaap

    2001-01-01

    This article uses a policy scientific approach to reconstruct assumptions underlying the Dutch Educational Supervision Act. We show an example of how to reconstruct and evaluate a program theory that is based on legislation of inspection. The assumptions explain how inspection leads to school improvement

  17. The Assumption of Proportional Components when Candecomp Is Applied to Symmetric Matrices in the Context of INDSCAL

    Science.gov (United States)

    Dosse, Mohammed Bennani; Berge, Jos M. F.

    2008-01-01

    The use of Candecomp to fit scalar products in the context of INDSCAL is based on the assumption that the symmetry of the data matrices involved causes the component matrices to be equal when Candecomp converges. Ten Berge and Kiers gave examples where this assumption is violated for Gramian data matrices. These examples are believed to be local…

  18. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization.

    Science.gov (United States)

    Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu

    2017-05-05

    This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.

  19. Coordinated tracking of linear multiagent systems with input saturation and stochastic disturbances.

    Science.gov (United States)

    Wang, Qingling; Sun, Changyin

    2017-07-21

    This paper addresses the coordinated tracking problem for linear multiagent systems with input saturation and stochastic disturbances. The objective is to construct a class of tracking control laws that achieve consensus tracking in the absence of disturbances, while guaranteeing a bounded variance of the state difference between the follower agent and the leader in the presence of disturbances, under the assumptions that each agent is asymptotically null controllable with bounded controls (ANCBC) and that the network is connected. By using the low gain feedback technique, a class of tracking control algorithms are proposed, and the coordinated tracking problem is solved through some routine manipulations. Finally, numerical examples are provided to demonstrate the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Rapid Airplane Parametric Input Design (RAPID)

    Science.gov (United States)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulation. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE equation for the lifting components and the fuselage definition equations. User specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool
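
    The boundary-condition-driven shaping can be illustrated in one dimension. The finite-difference sketch below (our analogue, not the RAPID code; the grid size and boundary values are arbitrary assumptions) solves u'''' = 0 with Dirichlet values and Neumann slopes prescribed at both ends, showing how parameters placed only in the boundary conditions determine a smooth blend between them.

        import numpy as np

        n = 101
        h = 1.0 / (n - 1)
        A = np.zeros((n, n))
        b = np.zeros(n)

        # interior rows: 5-point stencil for the fourth derivative, u'''' = 0
        for i in range(2, n - 2):
            A[i, i - 2:i + 3] = [1, -4, 6, -4, 1]

        # Dirichlet: u(0) = 0, u(1) = 1 (e.g., section heights)
        A[0, 0] = 1.0;   b[0] = 0.0
        A[-1, -1] = 1.0; b[-1] = 1.0
        # Neumann via one-sided differences: u'(0) = 3, u'(1) = 0 (edge slopes)
        A[1, 0:3] = [-3, 4, -1];  b[1] = 2 * h * 3.0
        A[-2, -3:] = [1, -4, 3];  b[-2] = 2 * h * 0.0

        u = np.linalg.solve(A, b)
        print(u[::20])   # smooth monotone blend from 0 to 1 (a cubic here)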

  1. Washing machines, driers and dishwashers. Background reports. Vol. 1: Basic assumptions and impact analyses

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-06-01

    Before analyzing wet appliances to establish a common European Union (EU) basis for defining efficiency in domestic wet appliances, a framework has to be set up. The first part of this Background Report deals with such a framework and with definitions, basic assumptions and test methods. The next sections give a short introduction to the framework of wet appliances and definitions taken from international standards. Chapter 2 elaborates on basic assumptions regarding appliance categories, capacity, energy efficiency and performance. Chapter 3 contains a survey of test methods from international standards and chapter 4 shows the present state of standards in the International Standardization Organization (IEC) and the Comite Europeen de Normalisation Electrotechnique (CENELEC). The next two chapters of the report deal with the user of wet appliances: the consumer. A more detailed analysis of aspects of daily use, such as ownership level, frequency of use and type of programme used, is given. An important question for this study is whether a 'European consumer' exists; section 5.5 deals with this subject. Two elements of the marketing mix, product and price, are considered. Several possible product options are reviewed and attention is paid to the impact of price on consumers' buying decisions. The findings of this report and recommendations are summarized. (au)

  2. Testing surrogacy assumptions: can threatened and endangered plants be grouped by biological similarity and abundances?

    Science.gov (United States)

    Che-Castaldo, Judy P; Neel, Maile C

    2012-01-01

    There is renewed interest in implementing surrogate species approaches in conservation planning due to the large number of species in need of management but limited resources and data. One type of surrogate approach involves selection of one or a few species to represent a larger group of species requiring similar management actions, so that protection and persistence of the selected species would result in conservation of the group of species. However, among the criticisms of surrogate approaches is the need to test underlying assumptions, which remain rarely examined. In this study, we tested one of the fundamental assumptions underlying use of surrogate species in recovery planning: that there exist groups of threatened and endangered species that are sufficiently similar to warrant similar management or recovery criteria. Using a comprehensive database of all plant species listed under the U.S. Endangered Species Act and tree-based random forest analysis, we found no evidence of species groups based on a set of distributional and biological traits or by abundances and patterns of decline. Our results suggested that application of surrogate approaches for endangered species recovery would be unjustified. Thus, conservation planning focused on individual species and their patterns of decline will likely be required to recover listed species.

  3. Maximizing the Delivery of MPR Broadcasting Under Realistic Physical Layer Assumptions

    Institute of Scientific and Technical Information of China (English)

    Francois Ingelrest; David Simplot-Ryl

    2008-01-01

    It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that a more realistic model should be considered for experimental simulations. Previous work on realistic scenarios has focused on unicast; however, broadcast requirements are fundamentally different and cannot be derived from the unicast case. Therefore, the broadcast protocols must be adapted in order to still be efficient under realistic assumptions. In this paper, we study the well-known multipoint relay broadcast protocol (MPR), in which each node has to choose a set of 1-hop neighbors to act as relays in order to cover the whole 2-hop neighborhood. We give experimental results showing that the original strategy used to select these multipoint relays does not suit a realistic model. On the basis of these results, we propose new selection strategies solely based on link quality. One of the key aspects of our solutions is that our strategies do not require any additional hardware and may be implemented at the application layer, which is particularly relevant to the context of ad hoc and sensor networks where energy savings are mandatory. We finally provide new experimental results that demonstrate the superiority of our strategies under realistic physical assumptions.
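
    The classic selection step is a greedy cover of the 2-hop neighborhood, which makes link-quality variants easy to prototype. The sketch below (ours; the weighting of coverage by link quality and the toy topology are illustrative assumptions, not the authors' exact heuristics) biases the standard greedy choice toward good links.

        def select_mprs(one_hop, two_hop_of, quality):
            """one_hop: set of 1-hop neighbors; two_hop_of[v]: 2-hop nodes
            reachable via neighbor v; quality[v]: link quality in [0, 1]."""
            uncovered = set().union(*two_hop_of.values()) - one_hop
            mprs = set()
            while uncovered and (one_hop - mprs):
                # favor neighbors that cover many nodes over good links
                best = max(one_hop - mprs,
                           key=lambda v: len(two_hop_of[v] & uncovered) * quality[v])
                if not two_hop_of[best] & uncovered:
                    break              # rest of the 2-hop set is unreachable
                mprs.add(best)
                uncovered -= two_hop_of[best]
            return mprs

        one_hop = {"a", "b", "c"}
        two_hop_of = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z"}}
        quality = {"a": 0.9, "b": 0.5, "c": 0.8}
        print(select_mprs(one_hop, two_hop_of, quality))   # {'a', 'c'}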

  4. Questioning the foundations of physics which of our fundamental assumptions are wrong?

    CERN Document Server

    Foster, Brendan; Merali, Zeeya

    2015-01-01

    The essays in this book look at ways in which the foundations of physics might need to be changed in order to make progress towards a unified theory. They are based on the prize-winning essays submitted to the FQXi essay competition “Which of Our Basic Physical Assumptions Are Wrong?”, which drew over 270 entries. As Nobel Laureate physicist Philip W. Anderson realized, the key to understanding nature’s reality is not anything “magical”, but the right attitude, “the focus on asking the right questions, the willingness to try (and to discard) unconventional answers, the sensitive ear for phoniness, self-deception, bombast, and conventional but unproven assumptions.” The authors of the eighteen prize-winning essays have, where necessary, adapted their essays for the present volume so as to (a) incorporate the community feedback generated in the online discussion of the essays, (b) add new material that has come to light since their completion and (c) to ensure accessibility to a broad audience of re...

  5. The ozone depletion potentials on halocarbons: Their dependence of calculation assumptions

    Science.gov (United States)

    Karol, Igor L.; Kiselev, Andrey A.

    1994-01-01

    The concept of Ozone Depletion Potential (ODP) is widely used in the evaluation of numerous halocarbons, and of their replacements, for their effects on ozone, but the methods, assumptions and conditions used in ODP calculations have not been analyzed adequately. In this paper a model study of the effects on ozone of instantaneous releases of various amounts of CH3CCl3 and of CHF2Cl (HCFC-22), for several compositions of the background atmosphere, is presented, aimed at understanding the connections of ODP values with the assumptions used in their calculation. To facilitate the ODP computation in numerous versions over the long time periods after release, these rather short-lived gases and a one-dimensional radiative photochemical model of the global annually averaged atmospheric layer up to 50 km height are used. Increasing the released gas global mass from 1 Mt to 1 Gt leads to an increase in ODP value, which stabilizes close to the upper bound of this range in the contemporary atmosphere. The same variations are analyzed for conditions of the CFC-free atmosphere of the 1960s and for the anthropogenically loaded atmosphere of the 21st century according to the known IPCC 'business as usual' scenario. Recommendations for proper ways of calculating ODPs are proposed for practically important cases.

  6. Testing assumptions of the enemy release hypothesis: generalist versus specialist enemies of the grass Brachypodium sylvaticum.

    Science.gov (United States)

    Halbritter, Aud H; Carroll, George C; Güsewell, Sabine; Roy, Bitty A

    2012-01-01

    The enemy release hypothesis (ERH) suggests greater success of species in an invaded range due to release from natural enemies. The ERH assumes there will be more specialist enemies in the native range and that generalists will have an equal effect in both ranges. We tested these assumptions with the grass Brachypodium sylvaticum in the native range (Switzerland) and invaded range (Oregon, USA). We assessed all the kinds of damage present (caused by fungi, insects, mollusks and deer) on both leaves and seeds at 10 sites in each range and correlated damage with host fitness. Only two of the 20 fungi found on leaves were specialist pathogens, and these were more frequent in the native range. Conversely there was more insect herbivory on leaves in the invaded range. All fungi and insects found on seeds were generalists. More species of fungi were found on seeds in the native range, and a higher proportion of them were pathogenic than in the invaded range. There were more kinds of enemies in the native range, where the plants had lower fitness, in accordance with the ERH. However, contrary to assumptions of the ERH, generalists appear to be equally or more important than specialists in reducing host fitness.

  7. How do our prior assumptions about basal drag affect ice sheet forecasts?

    Science.gov (United States)

    Arthern, Robert

    2015-04-01

    Forecasts of changes in the large ice sheets of Greenland and Antarctica often begin with an inversion to select initial values for state variables and parameters in the model, such as basal drag and ice viscosity. These inversions can be ill-posed in the sense that many different choices for the parameter values can match the observational data equally well. To recover a mathematically well-posed problem, assumptions must be made that restrict the possible values of the parameters, either by regularisation or by explicit definition of Bayesian priors. Common assumptions are that parameters vary smoothly in space or lie close to some preferred initial guess, but for glaciological inversions it is often unclear how smoothly the parameters should vary, or how reliable the initial guess should be considered. This is especially true of inversions for the basal drag coefficient that can vary enormously from place to place on length scales set by subglacial hydrology, which is itself extremely poorly constrained by direct observations. Here we use a combination of forward modelling, inversion and a theoretical analysis based on transformation group priors to investigate different ways of introducing prior information about parameters, and to consider the consequences for ice sheet forecasts.

  8. Bias in regression coefficient estimates when assumptions for handling missing data are violated: a simulation study

    Directory of Open Access Journals (Sweden)

    Sander MJ van Kuijk

    2016-03-01

    Full Text Available Background: The purpose of this simulation study is to assess the performance of multiple imputation compared to complete case analysis when assumptions of missing data mechanisms are violated. Methods: The authors performed a stochastic simulation study to assess the performance of Complete Case (CC) analysis and Multiple Imputation (MI) with different missing data mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). The study focused on the point estimation of regression coefficients and standard errors. Results: When data were MAR conditional on Y, CC analysis resulted in biased regression coefficients; they were all underestimated in our scenarios. In these scenarios, analysis after MI gave correct estimates. Yet, in case of MNAR, MI yielded biased regression coefficients, while CC analysis performed well. Conclusion: The authors demonstrated that MI was only superior to CC analysis in case of MCAR or MAR. In some scenarios CC may be superior to MI. Often it is not feasible to identify the reason why data in a given dataset are missing. Therefore, emphasis should be put on reporting the extent of missing values, the method used to address them, and the assumptions that were made about the mechanism that caused missing data.
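
    The reported pattern is easy to reproduce in miniature. The sketch below (ours; the linear model, missingness rates, and logistic MAR rule are assumed for illustration, and multiple imputation itself is not re-implemented) deletes predictor values under MCAR versus MAR-on-Y and compares the complete-case slope with the true slope.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50000
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(size=n)              # true slope = 0.5

        def cc_slope(x, y, keep):
            # complete-case estimate: fit only the retained observations
            return np.polyfit(x[keep], y[keep], 1)[0]

        mcar = rng.random(n) > 0.4                    # missingness ignores the data
        mar_y = rng.random(n) > 1 / (1 + np.exp(-y))  # missingness depends on Y

        print("CC slope, MCAR    :", round(cc_slope(x, y, mcar), 3))   # ~0.5
        print("CC slope, MAR on Y:", round(cc_slope(x, y, mar_y), 3))  # attenuated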

  9. Moral dilemmas in professions of public trust and the assumptions of ethics of social consequences

    Directory of Open Access Journals (Sweden)

    Dubiel-Zielińska Paulina

    2016-06-01

    Full Text Available The aim of the article is to show the possibility of applying assumptions from ethics of social consequences when making decisions about actions, as well as in situations of moral dilemmas, by persons performing occupations of public trust on a daily basis. The reasoning in the article is analytical and synthetic. The article begins with an explanation of the basic concepts of “profession” and “the profession of public trust” and a demonstration of the difference between these terms. This is followed by a general description of professions of public trust. The area and definition of moral dilemmas are emphasized, and representatives of the professions belonging to them are listed. After a brief characterization of the axiological foundations and the main assumptions of ethics of social consequences, actions according to Vasil Gluchman and Włodzimierz Galewicz are discussed and actions in line with ethics of social consequences are transferred to the practical domain. The article points out that actions in professional life are obligatory, impermissible, permissible, supererogatory or unmarked in the moral dimension. In the final part of the article a reflection is included on how to solve moral dilemmas from the position of a representative of a profession of public trust. The article concludes with a summary containing the conclusions that stem from ethics of social consequences for professions of public trust, followed by short examples.

  10. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    Science.gov (United States)

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

    It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up.
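
    In the linear case the index of moderated mediation has a simple closed form, which the sketch below computes from simulated data (our notation and effect sizes, not the paper's studies): with mediator model M = a0 + a1·X + a2·W + a3·X·W + e and outcome model Y = b0 + c·X + b1·M + e, the mediated effect at moderator level W is (a1 + a3·W)·b1, so the index is a3·b1.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 20000
        x, w = rng.normal(size=n), rng.normal(size=n)
        m = 0.4 * x + 0.2 * w + 0.3 * x * w + rng.normal(size=n)
        y = 0.1 * x + 0.5 * m + rng.normal(size=n)

        Xm = np.column_stack([np.ones(n), x, w, x * w])
        a = np.linalg.lstsq(Xm, m, rcond=None)[0]    # a[3] estimates 0.3
        Xy = np.column_stack([np.ones(n), x, m])
        b = np.linalg.lstsq(Xy, y, rcond=None)[0]    # b[2] estimates 0.5

        print("index of moderated mediation:", round(a[3] * b[2], 3))  # ~0.15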

  11. Temporal Distinctiveness in Task Switching: Assessing the Mixture-Distribution Assumption

    Directory of Open Access Journals (Sweden)

    James A Grange

    2016-02-01

    Full Text Available In task switching, increasing the response-cue interval (RCI) has been shown to reduce the switch cost. This has been attributed to a time-based decay process influencing the activation of memory representations of tasks (task-sets). Recently, an alternative account based on interference rather than decay has been successfully applied to these data (Horoufchin et al., 2011). In this account, variation of the RCI is thought to influence the temporal distinctiveness (TD) of episodic traces in memory, thus affecting their retrieval probability. This can affect performance because retrieval probability influences response time: if retrieval succeeds, responding is fast due to positive priming; if retrieval fails, responding is slow, due to having to perform the task via a slow algorithmic process. This account, and a recent formal model (Grange & Cross, 2015), makes the strong prediction that all RTs are a mixture of one of two processes: a fast process when retrieval succeeds, and a slow process when retrieval fails. The present paper assesses the evidence for this mixture-distribution assumption in TD data. In a first section, statistical evidence for mixture distributions is found using the fixed-point property test. In a second section, a mathematical process model with mixture distributions at its core is fitted to the response time distribution data. Both approaches provide good evidence in support of the mixture-distribution assumption, and thus support temporal distinctiveness accounts of the data.

  12. Bothe's 1925 heuristic assumption in the dawn of quantum field theory

    Science.gov (United States)

    Fick, D.

    2013-01-01

    In an unpublished manuscript filed at the Archive of the Max-Planck Society in Berlin, Walther Bothe (1891-1957) put, with one heuristic assumption, the spontaneous and induced transitions of light quanta on an equal footing, probably as early as 1925. In modern terms, he assumed that the probability for the creation of a light quantum in a phase space cell already containing s light quanta is proportional to s + 1 and not, as assumed at that time, proportional to s, that is, proportional to the fraction of the total radiation density which belongs to s light quanta. For Bothe, the added +1 somehow replaced the spontaneous decay and allowed him to treat empty phase space cells in a black body as thermodynamically consistent. We describe in some detail Bothe's route to this heuristic trick. Finally we discuss why both Bose's and Bothe's heuristic assumptions lead to an identical distribution law for light quanta in a black body and thus to Planck's law and Einstein's fluctuation formula.
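
    The core of the argument fits in a few lines (our reconstruction in modern notation, not a quotation from the manuscript). Let emission into a cell holding s quanta be proportional to s + 1 and absorption proportional to the number of quanta present, and let the matter obey the Boltzmann ratio N2/N1 = e^{-hν/kT}. Detailed balance between occupations s and s + 1 then gives

        \[
          \frac{p_{s+1}}{p_s} = \frac{(s+1)\,N_2}{(s+1)\,N_1} = e^{-h\nu/kT}
          \quad\Longrightarrow\quad
          p_s \propto e^{-s h\nu/kT},
          \qquad
          \bar{s} = \sum_{s \ge 0} s\, p_s = \frac{1}{e^{h\nu/kT} - 1}.
        \]

    The resulting geometric distribution over cell occupations is exactly Bose's counting, so the mean occupation reproduces Planck's law, and its variance, s̄(1 + s̄), yields Einstein's fluctuation formula; this is why the two heuristic assumptions lead to the same distribution.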

  13. Inhibitory Gating of Input Comparison in the CA1 Microcircuit.

    Science.gov (United States)

    Milstein, Aaron D; Bloss, Erik B; Apostolides, Pierre F; Vaidya, Sachin P; Dilly, Geoffrey A; Zemelman, Boris V; Magee, Jeffrey C

    2015-09-23

    Spatial and temporal features of synaptic inputs engage integration mechanisms on multiple scales, including presynaptic release sites, postsynaptic dendrites, and networks of inhibitory interneurons. Here we investigate how these mechanisms cooperate to filter synaptic input in hippocampal area CA1. Dendritic recordings from CA1 pyramidal neurons reveal that proximal inputs from CA3 as well as distal inputs from entorhinal cortex layer III (ECIII) sum sublinearly or linearly at low firing rates due to feedforward inhibition, but sum supralinearly at high firing rates due to synaptic facilitation, producing a high-pass filter. However, during ECIII and CA3 input comparison, supralinear dendritic integration is dynamically balanced by feedforward and feedback inhibition, resulting in suppression of dendritic complex spiking. We find that a particular subpopulation of CA1 interneurons expressing neuropeptide Y (NPY) contributes prominently to this dynamic filter by integrating both ECIII and CA3 input pathways and potently inhibiting CA1 pyramidal neuron dendrites.

  14. Uncertainty of input data for room acoustic simulations

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Marbjerg, Gerd; Brunskog, Jonas

    2016-01-01

    Although many room acoustic simulation models have been well established, simulation results will never be accurate with inaccurate and uncertain input data. This study addresses inappropriateness and uncertainty of input data for room acoustic simulations. Firstly, the random incidence absorption … uncertainty-included input data are proven to produce perceptually noticeable changes in objective parameters, such as the sound pressure level and loudness-based reverberation time; surfaces should not be assumed to be locally reacting, particularly for multi-layered absorbers having air cavities. Secondly, the study summarizes potential advanced absorption measurement techniques that can improve the quality of input data for room acoustic simulations. Lastly, plenty of uncertain input data are copied from unreliable sources; software developers and users should be careful when spreading such uncertain input data.

  15. Comparison of Parameter Estimations Using Dual-Input and Arterial-Input in Liver Kinetic Studies of FDG Metabolism.

    Science.gov (United States)

    Cui, Yunfeng; Bai, Jing

    2005-01-01

    Liver kinetic study of [18F]2-fluoro-2-deoxy-D-glucose (FDG) metabolism in the human body is an important tool for functional modeling and glucose metabolic rate estimation. In general, the arterial blood time-activity curve (TAC) and the tissue TAC are required as the input and output functions for the kinetic model. For liver studies, however, the arterial input may not be consistent with the actual model input, because the liver has a dual blood supply from the hepatic artery (HA) and the portal vein (PV). In this study, the results of model parameter estimation using a dual-input function are compared with those using an arterial-input function. First, a dynamic positron emission tomography (PET) experiment is performed after injection of FDG into the human body. The TACs of aortic blood, PV blood, and five regions of interest (ROIs) in the liver are obtained from the PET image. Then, the dual-input curve is generated by calculating a weighted sum of the arterial and PV input curves. Finally, the kinetic parameters of the five liver ROIs are estimated with the arterial-input and dual-input functions, respectively. The results indicate that the two methods provide different parameter estimates and that the dual-input function may lead to more accurate parameter estimation.
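
    A minimal sketch of the dual-input construction, assuming a fixed hepatic arterial fraction; the 25% used below is an illustrative placeholder, since in practice the fraction is taken from the literature or estimated together with the kinetic parameters:

    ```python
    import numpy as np

    def dual_input_tac(t, c_arterial, c_portal, hepatic_fraction=0.25):
        # Dual-input curve as a weighted sum of the hepatic-artery (HA) and
        # portal-vein (PV) time-activity curves.
        return hepatic_fraction * c_arterial + (1.0 - hepatic_fraction) * c_portal

    # Toy input curves with gamma-variate-like shapes (arbitrary units).
    t = np.linspace(0.0, 60.0, 601)            # minutes
    c_ha = t * np.exp(-t / 2.0)                # sharp arterial peak
    c_pv = 0.6 * t * np.exp(-t / 6.0)          # delayed, dispersed PV curve
    c_dual = dual_input_tac(t, c_ha, c_pv)
    print(f"peak HA input {c_ha.max():.3f}, peak dual input {c_dual.max():.3f}")
    ```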

  16. Analytical delay models for RLC interconnects under ramp input

    Institute of Scientific and Technical Information of China (English)

    REN Yinglei; MAO Junfa; LI Xiaochun

    2007-01-01

    Analytical delay models for Resistance-Inductance-Capacitance (RLC) interconnects with ramp input are presented for different situations, including the overdamped, underdamped, and critically damped response cases. The errors of delay estimation using the analytical models proposed in this paper are less than 3% in comparison with the SPICE-computed delay. These models are useful for the delay analysis of actual circuits, in which the input signal is a ramp rather than an ideal step.
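
    The analytical expressions themselves are not reproduced in the abstract, but the quantity they approximate is easy to compute numerically: the 50% delay of a lumped RLC section driven by a saturated ramp. A reference computation under that assumed single-section model (all element values are invented):

    ```python
    import numpy as np
    from scipy import signal

    def rlc_delay(R, L, C, t_rise=50e-12, v_th=0.5):
        # Single-section RLC model, H(s) = 1 / (LC s^2 + RC s + 1), driven
        # by a ramp that saturates at 1 V after t_rise; returns the 50% delay.
        sys = signal.lti([1.0], [L * C, R * C, 1.0])
        t = np.linspace(0.0, 2e-9, 20001)
        u = np.clip(t / t_rise, 0.0, 1.0)
        _, y, _ = signal.lsim(sys, u, t)
        t_in = v_th * t_rise                    # input crosses the threshold
        t_out = t[np.argmax(y >= v_th)]         # first output crossing
        return t_out - t_in

    print(rlc_delay(R=20.0, L=2e-9, C=0.2e-12))    # underdamped case
    print(rlc_delay(R=500.0, L=2e-9, C=0.2e-12))   # overdamped case
    ```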

  17. The Application of Input Theory to English Classroom Teaching

    Institute of Scientific and Technical Information of China (English)

    刘坤

    2015-01-01

    In the early 1980s, Stephen Krashen proposed a comprehensive Input Theory that explains how a second language is acquired. It remains highly relevant to present-day English classroom teaching. In this essay, applications of Input Theory to English classroom teaching are developed from six aspects, involving the nature of second language acquisition, comprehensible input, and so on.

  18. Knowledge Management in Customer Integration: A Customer Input Management System

    OpenAIRE

    Füller, Kathrin; Abud, Elias; Böhm, Markus; Krcmar, Helmut

    2016-01-01

    Customers can take an active role in the innovation process and provide their input (e.g., ideas, idea evaluations, or complaints) to the different phases of the innovation process. However, the management of a huge amount of unstructured customer input poses a challenge for companies. Existing software solutions focus on the early stages of idea management, and neglect the interoperability of tools, sharing, and reuse of customer inputs across innovation cycles and departments. Following the...

  19. Assumptions about footprint layer heights influence the quantification of emission sources: a case study for Cyprus

    Science.gov (United States)

    Hüser, Imke; Harder, Hartwig; Heil, Angelika; Kaiser, Johannes W.

    2017-09-01

    Lagrangian particle dispersion models (LPDMs) in backward mode are widely used to quantify the impact of transboundary pollution on downwind sites. Most LPDM applications count particles with a technique that introduces a so-called footprint layer (FL) with constant height, in which passing air tracer particles are assumed to be affected by surface emissions. The mixing layer dynamics are represented by the underlying meteorological model. This particle counting technique implicitly assumes that the atmosphere is well mixed in the FL. We have performed backward trajectory simulations with the FLEXPART model starting at Cyprus to calculate the sensitivity to emissions of upwind pollution sources. The emission sensitivity is used to quantify source contributions at the receptor and support the interpretation of ground measurements carried out during the CYPHEX campaign in July 2014. Here we analyse the effects of different constant and dynamic FL height assumptions. The results show that calculations with FL heights of 100 and 300 m yield similar but still discernible results. Comparison of calculations with FL heights constant at 300 m and dynamically following the planetary boundary layer (PBL) height exhibits systematic differences, with daytime and night-time sensitivity differences compensating for each other. The differences at daytime when a well-mixed PBL can be assumed indicate that residual inaccuracies in the representation of the mixing layer dynamics in the trajectories may introduce errors in the impact assessment on downwind sites. Emissions from vegetation fires are mixed up by pyrogenic convection which is not represented in FLEXPART. Neglecting this convection may lead to severe over- or underestimations of the downwind smoke concentrations. Introducing an extreme fire source from a different year in our study period and using fire-observation-based plume heights as reference, we find an overestimation of more than 60 % by the constant FL height
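
    A toy version of the particle-counting step illustrates why the FL height assumption matters. This is a stand-in for, not a reproduction of, FLEXPART's implementation; particle heights and PBL values are invented:

    ```python
    import numpy as np

    def surface_sensitivity(particle_z, pbl_height, fl_mode="constant",
                            fl_height=300.0):
        # Fraction of backward-trajectory particles counted as sensitive to
        # surface emissions, i.e. below the footprint-layer top.
        top = fl_height if fl_mode == "constant" else pbl_height
        return np.mean(particle_z <= top)

    rng = np.random.default_rng(1)
    z = rng.uniform(0.0, 2000.0, size=100_000)   # particle heights (m)
    for pbl in (150.0, 800.0, 1500.0):           # night ... midday PBL tops
        c = surface_sensitivity(z, pbl, "constant")
        d = surface_sensitivity(z, pbl, "dynamic")
        print(f"PBL = {pbl:6.0f} m   constant FL: {c:.3f}   dynamic FL: {d:.3f}")
    ```

    With a shallow night-time PBL the constant 300 m layer counts more particles than the dynamic one, and with a deep daytime PBL it counts fewer, mirroring the compensating day/night differences reported above.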

  20. Krashen’s Input Hypothesis and Foreign Language Teaching

    Institute of Scientific and Technical Information of China (English)

    彭辉

    2013-01-01

    Krashen’s Input Hypothesis is one of the most important theories in second language acquisition. The theory provides a good theoretical framework for foreign language teaching in China. The paper introduces the basic ideas of Krashen’s second language acquisition theories, the concept of comprehensible input, and Krashen’s interpretation of the input hypothesis. Thus, this paper aims to study Krashen’s Comprehensible Input and attempts to discover how to facilitate China’s foreign language teaching.

  1. Electronically Tunable High Input Impedance Voltage-Mode Multifunction Filter

    Science.gov (United States)

    Chen, Hua-Pin; Yang, Wan-Shing

    A novel electronically tunable high-input-impedance voltage-mode multifunction filter with a single input and three outputs is proposed, employing two single-output operational transconductance amplifiers, one differential difference current conveyor, and two capacitors. The presented filter can realize highpass, bandpass, and lowpass functions simultaneously. The filter exhibits high input impedance, so it can be cascaded without additional buffers. The circuit needs no external resistors and employs two grounded capacitors, which makes it suitable for integrated circuit implementation.
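
    The three simultaneous outputs can be pictured as one biquad pole pair shared by three numerators. A generic s-domain sketch (the mapping of the OTA transconductances and capacitors onto w0 and Q is omitted; the values below are arbitrary):

    ```python
    import numpy as np
    from scipy import signal

    # One biquad pole pair, three numerators: the single-input, three-output
    # structure of the filter. w0 and Q are arbitrary illustrative values.
    w0, Q = 2 * np.pi * 10e3, 0.707
    den = [1.0, w0 / Q, w0 ** 2]
    numerators = {
        "lowpass":  [w0 ** 2],
        "bandpass": [w0 / Q, 0.0],
        "highpass": [1.0, 0.0, 0.0],
    }
    w = np.logspace(3, 6, 1000) * 2 * np.pi
    for name, num in numerators.items():
        _, h = signal.freqs(num, den, w)
        idx = np.argmin(np.abs(w - w0))
        print(f"{name:8s} |H(jw0)| = {np.abs(h[idx]):.2f}")
    ```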

  2. The effects of redundant control inputs in optimal control

    Institute of Scientific and Technical Information of China (English)

    DUAN ZhiSheng; HUANG Lin; YANG Ying

    2009-01-01

    For a stabilizable system, extending the control inputs does not improve stabilizability, but it does matter for optimal control. In this paper, a necessary and sufficient condition is presented for control input extensions to strictly decrease the quadratic optimal performance index. A similar result is provided for the H_2 optimal control problem. These results show an essential difference between single-input and multi-input control systems. Several examples are given to illustrate the related problems.
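
    The flavor of the result is easy to check numerically: with the quadratic cost J = x0' P x0, where P solves the algebraic Riccati equation, appending a column to B can only keep or decrease J. A sketch with invented matrices:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Compare the optimal quadratic cost before and after extending the
    # input matrix B with an extra column. Matrices are illustrative only.
    A = np.array([[0.0, 1.0], [2.0, -1.0]])
    B1 = np.array([[0.0], [1.0]])                     # single input
    B2 = np.hstack([B1, np.array([[1.0], [0.0]])])    # extended inputs
    Q = np.eye(2)
    x0 = np.array([1.0, 0.0])

    for B in (B1, B2):
        P = solve_continuous_are(A, B, Q, np.eye(B.shape[1]))
        print(f"m = {B.shape[1]} input(s): J = x0' P x0 = {x0 @ P @ x0:.4f}")
    ```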

  3. Characteristic of energy input for laser forming sheet metal

    Institute of Scientific and Technical Information of China (English)

    Liqun Li(李俐群); Yanbin Chen(陈彦宾); Xiaosong Feng(封小松)

    2003-01-01

    Laser forming is a process in which laser-induced thermal deformation is used to form sheet metal without a hard forming tool or external forces. The energy input of the laser beam is the key factor for the temperature and stress distribution in the sheet metal. The purpose of this work is to investigate the influence of the energy input condition on heat input and deformation angle in two-dimensional laser forming. Variations in heat input resulting from material deformation are calculated and discussed first. Furthermore, for laser forming under the condition of constant laser energy input, the effects of the energy input mode on the deformation angle and temperature field are investigated.

  4. Effect of correlated inputs on DO (dissolved oxygen) uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.C.; Song, Q.

    1988-06-01

    Although uncertainty analysis has been discussed in recent water-quality-modeling literature, much of the work has assumed that all input variables and parameters are mutually independent. The objective of this paper is to evaluate the importance of correlation among the model inputs in the study of model-output uncertainty. The model used for demonstrating the influence of input-variable correlation is the Streeter-Phelps dissolved oxygen equation. The model forms the basis of many of the water-quality models currently in use and the relationships between model inputs and output-state variables are well understood.
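
    A Monte Carlo sketch of the effect, using the classical Streeter-Phelps deficit solution with jointly normal deoxygenation (kd) and reaeration (ka) rates; the means, standard deviations, and correlation below are invented for illustration:

    ```python
    import numpy as np

    def sp_deficit(t, kd, ka, L0=20.0, D0=2.0):
        # Streeter-Phelps dissolved-oxygen deficit (mg/L) at travel time t
        # (days); L0 = initial BOD and D0 = initial deficit are held fixed.
        return (kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))
                + D0 * np.exp(-ka * t))

    rng = np.random.default_rng(42)
    n, t = 50_000, 2.0
    mean, sd = np.array([0.3, 0.6]), np.array([0.03, 0.06])  # kd, ka (1/day)
    for rho in (0.0, 0.8):   # independent vs. positively correlated inputs
        corr = np.array([[1.0, rho], [rho, 1.0]])
        cov = np.diag(sd) @ corr @ np.diag(sd)
        kd, ka = rng.multivariate_normal(mean, cov, size=n).T
        ka = np.maximum(ka, kd + 1e-3)   # keep reaeration above deoxygenation
        D = sp_deficit(t, kd, ka)
        print(f"rho = {rho}: mean deficit {D.mean():.2f}, sd {D.std():.3f} mg/L")
    ```

    Because the deficit rises with kd and falls with ka, positively correlated errors in the two rates partially cancel, so assuming independence here overstates the output uncertainty.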

  5. Orthogonal topography in the parallel input architecture of songbird HVC.

    Science.gov (United States)

    Elliott, Kevin C; Wu, Wei; Bertram, Richard; Hyson, Richard L; Johnson, Frank

    2017-06-15

    Neural activity within the cortical premotor nucleus HVC (acronym is name) encodes the learned songs of adult male zebra finches (Taeniopygia guttata). HVC activity is driven and/or modulated by a group of five afferent nuclei (the Medial Magnocellular nucleus of the Anterior Nidopallium, MMAN; Nucleus Interface, NIf; nucleus Avalanche, Av; the Robust nucleus of the Arcopallium, RA; the Uvaeform nucleus, Uva). While earlier evidence suggested that HVC receives a uniformly distributed and nontopographic pattern of afferent input, recent evidence suggests this view is incorrect (Basista et al.). Here, we used a double-labeling strategy (varying both the distance between and the axial orientation of dual tracer injections into HVC) to reveal a massively parallel and in some cases topographic pattern of afferent input. Afferent neurons target only one rostral or caudal location within medial or lateral HVC, and each HVC location receives convergent input from each afferent nucleus in parallel. Quantifying the distributions of single-labeled cells revealed an orthogonal topography in the organization of afferent input from MMAN and NIf, two cortical nuclei necessary for song learning. MMAN input is organized across the lateral-medial axis whereas NIf input is organized across the rostral-caudal axis. To the extent that HVC activity is influenced by afferent input during the learning, perception, or production of song, functional models of HVC activity may need revision to account for the parallel input architecture of HVC, along with the orthogonal input topography of MMAN and NIf. © 2017 Wiley Periodicals, Inc.

  6. REEXAMINING THE ROLE OF INPUT AND THE FEATURES OF OPTIMAL INPUT (I) - Role of Input: Why Is Input Essential for Learning to Take Place?

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The paper examines the role of input from a psychological perspective. By exploring the relation between language and thought, and the functions of memory, the paper aims to reveal that language, as a medium of thought, cannot be isolated from thought in the thinking process. Therefore, input in the target language is meant to enable the learner to think in that language. Another idea borrowed from psychology is the phenomenon of forgetting, which results from interference. We argue that providing sufficient input for the learner is one of the effective ways to minimize the degree of interference. The role of input is then seen as the following: (1) fighting off mother tongue interference; (2) internalizing L2 grammar; (3) defossilizing and maintaining interlanguage competence; (4) learning vocabulary in context.

  7. Tale of Two Courthouses: A Critique of the Underlying Assumptions in Chronic Disease Self-Management for Aboriginal People

    Directory of Open Access Journals (Sweden)

    Isabelle Ellis

    2009-12-01

    Full Text Available This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models: namely, that there is a clear set of instructions for patients to comply with that all health care providers agree with, and that the health care provider and the patient agree with the chronic disease self-management plan developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. The assumptions are found to lack validity in this context; therefore, an alternative model to enhance chronic disease care is proposed.

  8. Optical tests of Bell's inequalities not resting upon the absurd fair sampling assumption

    CERN Document Server

    Santos, E

    2004-01-01

    A simple local hidden-variables model is exhibited which reproduces the results of all performed tests of Bell's inequalities involving optical photon pairs. For the old atomic-cascade experiments, like Aspect's, the model agrees with quantum mechanics even for ideal set-ups. For more recent experiments, using parametric down-converted photons, the agreement occurs only for actual experiments, involving low-efficiency detectors. Arguments are given against the fair sampling assumption, currently combined with the results of the experiments in order to claim a contradiction with local realism. New tests are proposed which are able to discriminate between quantum mechanics and a restricted, but appealing, family of local hidden-variables models. Such tests require detectors with efficiencies just above 20%.

  9. HARDINESS, WORLD ASSUMPTIONS, AND MOTIVATION OF ATHLETES IN CONTACT AND NON-CONTACT SPORTS

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Molchanova

    2017-04-01

    Full Text Available The personal psychological characteristics of athletes in a contact sport (freestyle wrestling) and a non-contact sport (archery) were investigated. Pronounced differences in hardiness, world assumptions, and motives for doing sport were obtained. In particular, archery athletes possess higher hardiness values and view the world more positively than wrestlers, while endorsing the motives of “successful for life quality and skills” and “physical perfection” less strongly. Better coping under permanently stressful conditions is therefore predicted for athletes in non-contact sports. The results are of practical importance for the counseling work of sport psychologists and could serve as a basis for training programs and programs for overcoming challenge stress.

  10. Ecological risk of anthropogenic pollutants to reptiles: Evaluating assumptions of sensitivity and exposure.

    Science.gov (United States)

    Weir, Scott M; Suski, Jamie G; Salice, Christopher J

    2010-12-01

    A large data gap for reptile ecotoxicology still persists; therefore, ecological risk assessments of reptiles usually incorporate the use of surrogate species. This necessitates that (1) the surrogate is at least as sensitive as the target taxon and/or (2) exposures to the surrogate are greater than that of the target taxon. We evaluated these assumptions for the use of birds as surrogates for reptiles. Based on a survey of the literature, birds were more sensitive than reptiles in less than 1/4 of the chemicals investigated. Dietary and dermal exposure modeling indicated that exposure to reptiles was relatively high, particularly when the dermal route was considered. We conclude that caution is warranted in the use of avian receptors as surrogates for reptiles in ecological risk assessment and emphasize the need to better understand the magnitude and mechanism of contaminant exposure in reptiles to improve exposure and risk estimation.

  11. A critical test of the assumption that men prefer conformist women and women prefer nonconformist men.

    Science.gov (United States)

    Hornsey, Matthew J; Wellauer, Richard; McIntyre, Jason C; Barlow, Fiona Kate

    2015-06-01

    Five studies tested the common assumption that women prefer nonconformist men as romantic partners, whereas men prefer conformist women. Studies 1 and 2 showed that both men and women preferred nonconformist romantic partners, but women overestimated the extent to which men prefer conformist partners. In Study 3, participants ostensibly in a small-group interaction showed preferences for nonconformist opposite-sex targets, a pattern that was particularly evident when men evaluated women. Dating success was greater the more nonconformist the sample was (Study 4), and perceptions of nonconformity in an ex-partner were associated with greater love and attraction toward that partner (Study 5). On the minority of occasions in which effects were moderated by gender, it was in the reverse direction to the traditional wisdom: Conformity was more associated with dating success among men. The studies contradict the notion that men disproportionately prefer conformist women. © 2015 by the Society for Personality and Social Psychology, Inc.

  12. The "invention" of lesbian acts in Iran: interpretative moves, hidden assumptions, and emerging categories of sexuality.

    Science.gov (United States)

    Bucar, Elizabeth M; Shirazi, Faegheh

    2012-01-01

    This article describes and explains the current official status of lesbianism in Iran. Our central question is why the installation of an Islamic government in Iran resulted in extreme regulations of sexuality. The authors argue that rather than a clear adoption of "Islamic teaching on lesbianism," the current regime of sexuality was "invented" through a series of interpretative moves, adoption of hidden assumptions, and creation of sexual categories. This article is organized into two sections. The first sets the scene of official sexuality in Iran through a summary of (1) the sections of the Iranian Penal code dealing with same-sex acts and (2) government support for sexual reassignment surgeries. The second section traces the "invention" of a dominant post-revolutionary Iranian view of Islam and sexuality through identifying a number of specific interpretive moves this view builds on.

  13. Impact of velocity distribution assumption on simplified laser speckle imaging equation

    Science.gov (United States)

    Ramirez-San-Juan, Julio C; Ramos-Garcia, Ruben; Guizar-Iturbide, Ileana; Martinez-Niconoff, Gabriel; Choi, Bernard

    2012-01-01

    Since blood flow is tightly coupled to the health status of biological tissue, several instruments have been developed to monitor blood flow and perfusion dynamics. One such instrument is laser speckle imaging. The goal of this study was to evaluate the use of two velocity distribution assumptions (Lorentzian- and Gaussian-based) to calculate speckle flow index (SFI) values. When the normalized autocorrelation functions for the Lorentzian and Gaussian velocity distributions satisfy the same definition of correlation time, the two assumptions predict the same velocity range at low speckle contrast (0 < C < 0.6) but different flow velocity ranges at high contrast. Our derived equations form the basis for simplified calculations of SFI values. PMID:18542407
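
    A numerical sketch under the Lorentzian assumption: the measured contrast K is inverted to a correlation time tau_c through the standard Lorentzian contrast expression, and the flow index is reported as 1/tau_c. The 5 ms exposure time is an arbitrary choice:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def contrast_lorentzian(tau_c, T):
        # Speckle contrast K(tau_c, T) for a Lorentzian velocity distribution
        # (negative-exponential field correlation function).
        x = tau_c / T
        return np.sqrt((x / 2.0) * (2.0 - x * (1.0 - np.exp(-2.0 / x))))

    def sfi_from_contrast(K, T=5e-3):
        # Invert K(tau_c) numerically and report SFI = 1/tau_c (1/s).
        tau_c = brentq(lambda tc: contrast_lorentzian(tc, T) - K, 1e-9, 10.0 * T)
        return 1.0 / tau_c

    for K in (0.1, 0.3, 0.5):
        print(f"K = {K}: SFI = {sfi_from_contrast(K):.0f} 1/s")
    ```

    Lower contrast maps to shorter correlation times and hence faster flow, which is the relationship the simplified SFI equations exploit.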

  14. Influence of modeling assumptions on the stress and strain state of a 450 t crane hoisting winch construction

    Directory of Open Access Journals (Sweden)

    Damian GĄSKA

    2011-01-01

    Full Text Available This work investigates the FEM simulation of the stress and strain state of a selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The model of the trolley was built from several parts cooperating with each other in contact. The influence of modeling assumptions (simplifications in selected construction nodes) on the maximum stress and strain values and their areas of occurrence was analyzed. The aim of this study was to determine whether simplifications which reduce the time required to prepare the model and perform calculations (e.g., a rigid connection instead of contact) substantially change the characteristics of the model.

  15. Meso-scale modeling: beyond local equilibrium assumption for multiphase flow

    CERN Document Server

    Wang, Wei

    2015-01-01

    This is a summary of the article with the same title, accepted for publication in Advances in Chemical Engineering, 47: 193-277 (2015). Gas-solid fluidization is a typical nonlinear nonequilibrium system with multiscale structure. In particular, the mesoscale structure in terms of bubbles or clusters, which can be characterized by nonequilibrium features such as bimodal velocity distributions, energy non-equipartition, and correlated density fluctuations, is the critical factor. The traditional two-fluid model (TFM) and the relevant closures depend on local equilibrium and homogeneous distribution assumptions, and fail to predict the dynamic, nonequilibrium phenomena in circulating fluidized beds even with fine-grid resolution. In contrast, mesoscale modeling, as exemplified by the energy-minimization multiscale (EMMS) model, is consistent with the nonequilibrium features of multiphase flows. Thus, the structure-dependent multi-fluid model conservation equations with EMMS-based mesoscale modeling greatly i...

  16. Special relativity as the limit of an Aristotelian universal friction theory under Reye's assumption

    CERN Document Server

    Minguzzi, E

    2014-01-01

    This work explores a classical mechanical theory under two further assumptions: (a) there is a universal dry friction force (Aristotelian mechanics), and (b) the variation of the mass of a body due to wear is proportional to the work done by the friction force on the body (Reye's hypothesis). It is shown that mass depends on velocity as in Special Relativity, and that the velocity is constant at a particular characteristic value. In the limit of vanishing friction the theory satisfies a relativity principle, as bodies do not decelerate and the absolute frame therefore becomes unobservable. However, the limit theory is not Newtonian mechanics, with its Galilei group symmetry, but rather Special Relativity. This result suggests regarding Special Relativity as the limit of a theory presenting universal friction and exchange of mass-energy with a reservoir (vacuum). Thus, quite surprisingly, Special Relativity follows from the absolute space (ether) concept and could have been discovered following studies of Ar...

  17. Evaluation of Horizontal Electric Field Under Different Lightning Current Models by Perfect Ground Assumption

    Institute of Scientific and Technical Information of China (English)

    LIANG Jianfeng; LI Yanming

    2012-01-01

    Lightning electromagnetics can affect the reliability of power and communication systems. Therefore, evaluation of the electromagnetic fields generated by the lightning return stroke is indispensable. Arnold Sommerfeld proposed a model to calculate the electromagnetic field, but it involves the time-consuming Sommerfeld integral. A perfectly conducting ground assumption, by contrast, allows fast calculation. This paper therefore reviews the perfect-ground equations for the evaluation of lightning electromagnetic fields, presents three engineering lightning return stroke models, and calculates the horizontal electric field produced by each of them. According to the results, the amplitude of the lightning return stroke has a strong impact on the horizontal electric field, and its steepness also influences the field. Moreover, the perfect-ground method is faster than the Sommerfeld integral method.
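
    The three engineering models differ only in how the channel-base current is attenuated with height: in each, i(z, t) = P(z) i(0, t - z/v). A sketch of the attenuation factors for the transmission-line (TL), modified-TL-linear (MTLL), and modified-TL-exponential (MTLE) models, with an assumed channel height and decay constant:

    ```python
    import numpy as np

    H, lam = 7500.0, 2000.0          # channel height, decay constant (m)
    z = np.linspace(0.0, H, 5)       # heights along the channel

    models = {
        "TL":   np.ones_like(z),     # no attenuation
        "MTLL": 1.0 - z / H,         # linear attenuation with height
        "MTLE": np.exp(-z / lam),    # exponential attenuation with height
    }
    for name, P in models.items():
        print(name, np.round(P, 3))
    ```

    The choice of P(z) changes the spatial current distribution, and with it the amplitude and waveshape of the horizontal electric field computed at the ground.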

  18. Premiums for Long-Term Care Insurance Packages: Sensitivity with Respect to Biometric Assumptions

    Directory of Open Access Journals (Sweden)

    Ermanno Pitacco

    2016-02-01

    Full Text Available Long-term care insurance (LTCI covers are rather recent products, in the framework of health insurance. It follows that specific biometric data are scanty; pricing and reserving problems then arise because of difficulties in the choice of appropriate technical bases. Different benefit structures imply different sensitivity degrees with respect to changes in biometric assumptions. Hence, an accurate sensitivity analysis can help in designing LTCI products and, in particular, in comparing stand-alone products to combined products, i.e., packages including LTCI benefits and other lifetime-related benefits. Numerical examples show, in particular, that the stand-alone cover is much riskier than all of the LTCI combined products that we have considered. As a consequence, the LTCI stand-alone cover is a highly “absorbing” product as regards capital requirements for solvency purposes.

  19. Probabilistic tsunami hazard assessment for the Makran region with focus on maximum magnitude assumption

    Science.gov (United States)

    Hoechner, Andreas; Babeyko, Andrey Y.; Zamora, Natalia

    2016-06-01

    Despite having been rather seismically quiescent for the last decades, the Makran subduction zone is capable of hosting destructive earthquakes and tsunami. In particular, the well-known thrust event in 1945 (Balochistan earthquake) led to about 4000 casualties. Nowadays, the coastal regions are more densely populated and vulnerable to similar events. Furthermore, some recent publications discuss rare but significantly larger events at the Makran subduction zone as possible scenarios. We analyze the instrumental and historical seismicity at the subduction plate interface and generate various synthetic earthquake catalogs spanning 300 000 years with varying magnitude-frequency relations. For every event in the catalogs we compute estimated tsunami heights and present the resulting tsunami hazard along the coasts of Pakistan, Iran and Oman in the form of probabilistic tsunami hazard curves. We show how the hazard results depend on variation of the Gutenberg-Richter parameters and especially maximum magnitude assumption.
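
    A stripped-down version of the synthetic-catalog step, assuming a doubly truncated Gutenberg-Richter magnitude distribution; the a and b values and the magnitude bounds are placeholders, not the paper's calibrated parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def synthetic_catalog(years, a=5.0, b=1.0, m_min=7.0, m_max=9.0):
        # Event count from the Gutenberg-Richter rate of M >= m_min events;
        # magnitudes from the doubly truncated exponential via inverse CDF.
        n = rng.poisson(years * 10.0 ** (a - b * m_min))
        beta = b * np.log(10.0)
        c = 1.0 - np.exp(-beta * (m_max - m_min))
        return m_min - np.log(1.0 - rng.random(n) * c) / beta

    years = 300_000
    cat = synthetic_catalog(years)
    for m in (7.5, 8.0, 8.5):
        rate = np.sum(cat >= m) / years
        print(f"M >= {m}: {rate:.2e} /yr, return period {1.0 / rate:,.0f} yr")
    ```

    Varying m_max and the b value in such a catalog, and attaching a modeled tsunami height to each event, is what turns these exceedance rates into the probabilistic tsunami hazard curves described above.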

  20. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is, for the first time, constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environment and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. To make the algorithm recursive, the propagator method (PM) based subspace approach is extended to the EIV framework, forming a recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on an AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
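
    The PM-EIV algorithm itself is too involved for a sketch, but the recursive skeleton it builds on is ordinary recursive least squares. Below, plain RLS identifies a toy Hammerstein regression (a static u, u² nonlinearity feeding a first-order linear part); the EIV bias-compensation for noisy regressors is deliberately omitted, and all system values are invented:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.995):
        # Standard recursive least-squares step with forgetting factor lam.
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
        return theta, P

    rng = np.random.default_rng(3)
    theta, P = np.zeros(3), 1e3 * np.eye(3)
    y_prev = 0.0
    true_params = np.array([0.7, 1.0, 0.5])       # a1, b1, b2
    for _ in range(2000):
        u = rng.uniform(-1.0, 1.0)
        phi = np.array([y_prev, u, u ** 2])       # Hammerstein regressor
        y = true_params @ phi + 0.01 * rng.standard_normal()
        theta, P = rls_update(theta, P, phi, y)
        y_prev = y
    print(np.round(theta, 3))                     # close to [0.7, 1.0, 0.5]
    ```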