WorldWideScience

Sample records for continuum approximation ignores

  1. The Varieties of Ignorance

    DEFF Research Database (Denmark)

    Nottelmann, Nikolaj

    2016-01-01

    This chapter discusses varieties of ignorance divided according to kind (what the subject is ignorant of), degree, and order (e.g. ignorance of ignorance equals second-order ignorance). It provides analyses of notions such as factual ignorance, erotetic ignorance (ignorance of answers to question...

  2. Ignoring Ignorance: Notes on Pedagogical Relationships in Citizen Science

    Directory of Open Access Journals (Sweden)

    Michael Scroggins

    2017-04-01

Full Text Available Theoretically, this article seeks to broaden the conceptualization of ignorance within STS by drawing on a line of theory developed in the philosophy and anthropology of education to argue that ignorance can be productively conceptualized as a state of possibility and that doing so can enable more democratic forms of citizen science. In contrast to conceptualizations of ignorance as a lack, lag, or manufactured product, ignorance is developed here as both the opening move in scientific inquiry and the common ground over which that inquiry proceeds. Empirically, the argument is developed through an ethnographic description of Scroggins' participation in a failed citizen science project at a DIYbio laboratory. Supporting the empirical case are a review of the STS literature on expertise and a critical examination of the structures of participation within two canonical citizen science projects. Though the work is onerous, close attention to how people transform one another during inquiry allows increasingly democratic forms of citizen science, grounded in the commonness of ignorance, to be put into practice.

  3. Organizational Ignorance

    DEFF Research Database (Denmark)

    Lange, Ann-Christina

    2016-01-01

    This paper provides an analysis of strategic uses of ignorance or not-knowing in one of the most secretive industries within the financial sector. The focus of the paper is on the relation between imitation and ignorance within the organizational structure of high-frequency trading (HFT) firms...... and investigate the kinds of imitations that might be produced from structures of not-knowing (i.e. structures intended to divide, obscure and protect knowledge). This point is illustrated through ethnographic studies and interviews within five HFT firms. The data show how a black-box structure of ignorance...

  4. Strategic Self-Ignorance

    DEFF Research Database (Denmark)

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.

    We examine strategic self-ignorance—the use of ignorance as an excuse to overindulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts...... of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals — a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five...... subjects (58 percent) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance....

  5. Strategic self-ignorance

    DEFF Research Database (Denmark)

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.

    2016-01-01

    We examine strategic self-ignorance—the use of ignorance as an excuse to over-indulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts...... of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals—a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five...... subjects (58%) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance....

  6. Clash of Ignorance

    Directory of Open Access Journals (Sweden)

    Mahmoud Eid

    2012-06-01

    Full Text Available The clash of ignorance thesis presents a critique of the clash of civilizations theory. It challenges the assumptions that civilizations are monolithic entities that do not interact and that the Self and the Other are always opposed to each other. Despite some significantly different values and clashes between Western and Muslim civilizations, they overlap with each other in many ways and have historically demonstrated the capacity for fruitful engagement. The clash of ignorance thesis makes a significant contribution to the understanding of intercultural and international communication as well as to the study of inter-group relations in various other areas of scholarship. It does this by bringing forward for examination the key impediments to mutually beneficial interaction between groups. The thesis directly addresses the particular problem of ignorance that other epistemological approaches have not raised in a substantial manner. Whereas the critique of Orientalism deals with the hegemonic construction of knowledge, the clash of ignorance paradigm broadens the inquiry to include various actors whose respective distortions of knowledge symbiotically promote conflict with each other. It also augments the power-knowledge model to provide conceptual and analytical tools for understanding the exploitation of ignorance for the purposes of enhancing particular groups’ or individuals’ power. Whereas academics, policymakers, think tanks, and religious leaders have referred to the clash of ignorance concept, this essay contributes to its development as a theory that is able to provide a valid basis to explain the empirical evidence drawn from relevant cases.

  7. Ignore and Conquer.

    Science.gov (United States)

    Conroy, Mary

    1989-01-01

    Discusses how teachers can deal with student misbehavior by ignoring negative behavior that is motivated by a desire for attention. Practical techniques are described for pinpointing attention seekers, enlisting classmates to deal with misbehaving students, ignoring misbehavior, and distinguishing behavior that responds to this technique from…

  8. Ignorability for categorical data

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2005-01-01

    We study the problem of ignorability in likelihood-based inference from incomplete categorical data. Two versions of the coarsened at random assumption (car) are distinguished, their compatibility with the parameter distinctness assumption is investigated and several conditions for ignorability...

  9. The logic of strategic ignorance.

    Science.gov (United States)

    McGoey, Linsey

    2012-09-01

Ignorance and knowledge are often thought of as opposite phenomena. Knowledge is seen as a source of power, and ignorance as a barrier to consolidating authority in political and corporate arenas. This article disputes this, exploring the ways that ignorance serves as a productive asset, helping individuals and institutions to command resources, deny liability in the aftermath of crises, and assert expertise in the face of unpredictable outcomes. Through a focus on the Food and Drug Administration's licensing of Ketek, an antibiotic drug manufactured by Sanofi-Aventis and linked to liver failure, I suggest that in drug regulation, different actors, from physicians to regulators to manufacturers, often battle over who can attest to the least knowledge of the efficacy and safety of different drugs - a finding that raises new insights about the value of ignorance as an organizational resource. © London School of Economics and Political Science 2012.

  10. From dissecting ignorance to solving algebraic problems

    International Nuclear Information System (INIS)

    Ayyub, Bilal M.

    2004-01-01

Engineers and scientists are increasingly required to design, test, and validate new complex systems in simulation environments and/or with limited experimental results due to international and/or budgetary restrictions. Dealing with complex systems requires assessing knowledge and information by critically evaluating them in terms of relevance, completeness, non-distortion, coherence, and other key measures. Using the concepts and definitions from evolutionary knowledge and epistemology, ignorance is examined and classified in the paper. Two ignorance states for a knowledge agent are identified: (1) a non-reflective (or blind) state, i.e. the person does not know of self-ignorance, a case of ignorance of ignorance; and (2) a reflective state, i.e. the person knows and recognizes self-ignorance. Ignorance can be viewed to have a hierarchical classification based on its sources and nature, as provided in the paper. The paper also explores limits on knowledge construction, closed and open world assumptions, and fundamentals of evidential reasoning using belief revision and diagnostics within the framework of ignorance analysis for knowledge construction. The paper also examines an algebraic problem set identified by Sandia National Laboratories as a basic building block for uncertainty propagation in computational mechanics. Solution algorithms are provided for the problem set under various assumptions about the state of knowledge about its parameters.

  11. From dissecting ignorance to solving algebraic problems

    Energy Technology Data Exchange (ETDEWEB)

    Ayyub, Bilal M

    2004-09-01

Engineers and scientists are increasingly required to design, test, and validate new complex systems in simulation environments and/or with limited experimental results due to international and/or budgetary restrictions. Dealing with complex systems requires assessing knowledge and information by critically evaluating them in terms of relevance, completeness, non-distortion, coherence, and other key measures. Using the concepts and definitions from evolutionary knowledge and epistemology, ignorance is examined and classified in the paper. Two ignorance states for a knowledge agent are identified: (1) a non-reflective (or blind) state, i.e. the person does not know of self-ignorance, a case of ignorance of ignorance; and (2) a reflective state, i.e. the person knows and recognizes self-ignorance. Ignorance can be viewed to have a hierarchical classification based on its sources and nature, as provided in the paper. The paper also explores limits on knowledge construction, closed and open world assumptions, and fundamentals of evidential reasoning using belief revision and diagnostics within the framework of ignorance analysis for knowledge construction. The paper also examines an algebraic problem set identified by Sandia National Laboratories as a basic building block for uncertainty propagation in computational mechanics. Solution algorithms are provided for the problem set under various assumptions about the state of knowledge about its parameters.

  12. Is Ignorance of Climate Change Culpable?

    Science.gov (United States)

    Robichaud, Philip

    2017-10-01

Sometimes ignorance is an excuse. If an agent did not know and could not have known that her action would realize some bad outcome, then it is plausible to maintain that she is not to blame for realizing that outcome, even when the act that leads to this outcome is wrong. This general thought can be brought to bear in the context of climate change insofar as we think (a) that the actions of individual agents play some role in realizing climate harms and (b) that these actions are apt targets for being considered right or wrong. Are agents who are ignorant about climate change and the way their actions contribute to it excused because of their ignorance, or is their ignorance culpable? In this paper I examine these questions from the perspective of recent developments in the theories of responsibility for ignorant action and characterize their verdicts. After developing some objections to existing attempts to explore these questions, I characterize two influential theories of moral responsibility and discuss their implications for three different types of ignorance about climate change. I conclude with some recommendations for how we should react in the face of the theories' conflicting verdicts. The answer to the question posed in the title, then, is: "Well, it's complicated."

  13. Ignorance, information and autonomy

    OpenAIRE

    Harris, J.; Keywood, K.

    2001-01-01

    People have a powerful interest in genetic privacy and its associated claim to ignorance, and some equally powerful desires to be shielded from disturbing information are often voiced. We argue, however, that there is no such thing as a right to remain in ignorance, where a right is understood as an entitlement that trumps competing claims. This does not of course mean that information must always be forced upon unwilling recipients, only that there is no prima facie entitlement to be protect...

  14. The Power of Ignorance | Code | Philosophical Papers

    African Journals Online (AJOL)

    Taking my point of entry from George Eliot's reference to 'the power of Ignorance', I analyse some manifestations of that power as she portrays it in the life of a young woman of affluence, in her novel Daniel Deronda. Comparing and contrasting this kind of ignorance with James Mill's avowed ignorance of local tradition and ...

  15. On the Rationality of Pluralistic Ignorance

    DEFF Research Database (Denmark)

    Bjerring, Jens Christian Krarup; Hansen, Jens Ulrik; Pedersen, Nikolaj Jang Lee Linding

    2014-01-01

Pluralistic ignorance is a socio-psychological phenomenon that involves a systematic discrepancy between people’s private beliefs and public behavior in certain social contexts. Recently, pluralistic ignorance has gained increased attention in formal and social epistemology. But to get clear...

  16. Ignorance, information and autonomy.

    Science.gov (United States)

    Harris, J; Keywood, K

    2001-09-01

People have a powerful interest in genetic privacy and its associated claim to ignorance, and some equally powerful desires to be shielded from disturbing information are often voiced. We argue, however, that there is no such thing as a right to remain in ignorance, where a right is understood as an entitlement that trumps competing claims. This does not of course mean that information must always be forced upon unwilling recipients, only that there is no prima facie entitlement to be protected from true or honest information about oneself. Any claims to be shielded from information about the self must compete on equal terms with claims based in the rights and interests of others. In balancing the weight and importance of rival considerations about giving or withholding information, if rights claims have any place, rights are more likely to be defensible on the side of honest communication of information rather than in defence of ignorance. The right to free speech and the right to decline to accept responsibility to take decisions for others imposed by those others seem to us more plausible candidates for fully fledged rights in this field than any purported right to ignorance. Finally, and most importantly, if the right to autonomy is invoked, a proper understanding of the distinction between claims to liberty and claims to autonomy show that the principle of autonomy, as it is understood in contemporary social ethics and English law, supports the giving rather than the withholding of information in most circumstances.

  17. Learning to ignore: acquisition of sustained attentional suppression.

    Science.gov (United States)

    Dixon, Matthew L; Ruppel, Justin; Pratt, Jay; De Rosa, Eve

    2009-04-01

We examined whether the selection mechanisms committed to the suppression of ignored stimuli can be modified by experience to produce a sustained, rather than transient, change in behavior. Subjects repeatedly ignored the shape of stimuli while attending to their color. On subsequent attention to shape, there was a robust and sustained decrement in performance that was selective to when shape was ignored across multiple-color-target contexts, relative to a single-color-target context. Thus, the amount of time ignored was not sufficient to induce a sustained performance decrement. Moreover, in this group, individual differences in initial color-target selection were associated with the subsequent performance decrement when attending to previously ignored stimuli. Accompanying this sustained decrement in performance was a transfer in the locus of suppression from an exemplar (e.g., a circle) to a feature (i.e., shape) level of representation. These data suggest that learning can influence attentional selection by sustained attentional suppression of ignored stimuli.

  18. The virtues of ignorance.

    Science.gov (United States)

    Son, Lisa K; Kornell, Nate

    2010-02-01

    Although ignorance and uncertainty are usually unwelcome feelings, they have unintuitive advantages for both human and non-human animals, which we review here. We begin with the perils of too much information: expertise and knowledge can come with illusions (and delusions) of knowing. We then describe how withholding information can counteract these perils: providing people with less information enables them to judge more precisely what they know and do not know, which in turn enhances long-term memory. Data are presented from a new experiment that illustrates how knowing what we do not know can result in helpful choices and enhanced learning. We conclude by showing that ignorance can be a virtue, as long as it is recognized and rectified. Copyright 2009 Elsevier B.V. All rights reserved.

  19. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    Energy Technology Data Exchange (ETDEWEB)

    Groen, E.A., E-mail: Evelyne.Groen@gmail.com [Wageningen University, P.O. Box 338, Wageningen 6700 AH (Netherlands); Heijungs, R. [Vrije Universiteit Amsterdam, De Boelelaan 1105, Amsterdam 1081 HV (Netherlands); Leiden University, Einsteinweg 2, Leiden 2333 CC (Netherlands)

    2017-01-15

Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
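The contrast between the analytical and sampling approaches described in this abstract can be illustrated with first-order error propagation on a toy two-parameter model. All coefficients, standard deviations, and the correlation value below are made-up assumptions for illustration, not numbers from the study:

```python
import numpy as np

# Hypothetical linear model: output = a*x + b*y, with correlated inputs x, y.
a, b = 2.0, 3.0      # deterministic coefficients (assumed)
sx, sy = 0.5, 0.4    # standard deviations of x and y (assumed)
rho = 0.6            # correlation between x and y (assumed)

# Analytical output variance WITH the correlation (cross) term:
# Var(a*x + b*y) = a^2*sx^2 + b^2*sy^2 + 2*a*b*rho*sx*sy
var_with = a**2 * sx**2 + b**2 * sy**2 + 2 * a * b * rho * sx * sy

# Ignoring the correlation simply drops the cross term:
var_without = a**2 * sx**2 + b**2 * sy**2

# Sampling approach: draw correlated inputs and estimate the variance.
rng = np.random.default_rng(0)
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
xs, ys = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
mc_var = np.var(a * xs + b * ys)

print(var_with, var_without, mc_var)
```

Because the assumed correlation is positive here, the cross term is positive and ignoring it underestimates the output variance; with a negative correlation the sign of the error flips, which is the "increase or decrease" point the abstract makes.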

  20. On strategic ignorance of environmental harm and social norms

    DEFF Research Database (Denmark)

    Thunström, Linda; van 't Veld, Klaas; Shogren, Jason

, and that they use ignorance as an excuse to engage in less pro-environmental behavior. It also predicts that the cost of ignorance increases if people can learn about the social norm from the information. We test the model predictions empirically with an experiment that involves an imaginary long-distance flight...... and an option to buy offsets for the flight’s carbon footprint. More than half (53 percent) of the subjects choose to ignore information on the carbon footprint alone before deciding their offset purchase, but ignorance significantly decreases (to 29 percent) when the information additionally reveals the social...

  1. DMPD: TLR ignores methylated RNA? [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

Full Text Available TLR ignores methylated RNA? Ishii KJ, Akira S. Immunity. 2005 Aug;23(2):111-3. PubmedID 16111629.

  2. Knowledge, responsibility, decision making and ignorance

    DEFF Research Database (Denmark)

    Huniche, Lotte

    2001-01-01

    of and ignoring) seems to be commonly applicable to describing persons living at risk for Huntington´s Disease (HD). So what does everyday conduct of life look like from an "ignorance" perspective? And how can we discuss and argue about morality and ethics taking these seemingly diverse ways of living at risk...... into account? Posing this question, I hope to contribute to new reflections on possibilities and constraints in people´s lives with HD as well as in research and to open up new ways of discussing "right and wrong"....

  3. On strategic ignorance of environmental harm and social norms

    DEFF Research Database (Denmark)

    Thunström, Linda; van’t Veld, Klaas; Shogren, Jason. F.

    2014-01-01

    decreases (to 29 percent) when the information additionally reveals the share of air travelers who buy carbon offsets. We find evidence that some people use ignorance as an excuse to reduce pro-environmental behavior—ignorance significantly decreases the probability of buying carbon offsets.......Are people strategically ignorant of the negative externalities their activities cause the environment? Herein we examine if people avoid costless information on those externalities and use ignorance as an excuse to reduce pro-environmental behavior. We develop a theoretical framework in which...... people feel internal pressure (“guilt”) from causing harm to the environment (e.g., emitting carbon dioxide) as well as external pressure to conform to the social norm for pro-environmental behavior (e.g., offsetting carbon emissions). Our model predicts that people may benefit from avoiding information...

  4. Ignorance-Based Instruction in Higher Education.

    Science.gov (United States)

    Stocking, S. Holly

    1992-01-01

    Describes how three groups of educators (in a medical school, a psychology department, and a journalism school) are helping instructors and students to recognize, manage, and use ignorance to promote learning. (SR)

  5. Mid-adolescent neurocognitive development of ignoring and attending emotional stimuli

    Directory of Open Access Journals (Sweden)

    Nora C. Vetter

    2015-08-01

    Full Text Available Appropriate reactions toward emotional stimuli depend on the distribution of prefrontal attentional resources. In mid-adolescence, prefrontal top-down control systems are less engaged, while subcortical bottom-up emotional systems are more engaged. We used functional magnetic resonance imaging to follow the neural development of attentional distribution, i.e. attending versus ignoring emotional stimuli, in adolescence. 144 healthy adolescents were studied longitudinally at age 14 and 16 while performing a perceptual discrimination task. Participants viewed two pairs of stimuli – one emotional, one abstract – and reported on one pair whether the items were the same or different, while ignoring the other pair. Hence, two experimental conditions were created: “attending emotion/ignoring abstract” and “ignoring emotion/attending abstract”. Emotional valence varied between negative, positive, and neutral. Across conditions, reaction times and error rates decreased and activation in the anterior cingulate and inferior frontal gyrus increased from age 14 to 16. In contrast, subcortical regions showed no developmental effect. Activation of the anterior insula increased across ages for attending positive and ignoring negative emotions. Results suggest an ongoing development of prefrontal top-down resources elicited by emotional attention from age 14 to 16 while activity of subcortical regions representing bottom-up processing remains stable.

  6. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    Science.gov (United States)

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only; it is proved by showing that the core and the folds of the protein have two identical sides for all short sequences.
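The even/odd split behind the one-third and one-fourth approximation algorithms can be sketched in a few lines. This is an illustrative reading of the classic HP-model bound (on a square lattice, an H at an even sequence index can only form a contact with an H at an odd index), not the paper's cellular-automaton method; the function names are hypothetical:

```python
def even_odd_h_counts(hp: str):
    """Count hydrophobic residues at even and odd positions (0-based)."""
    evens = sum(1 for i, c in enumerate(hp) if c == 'H' and i % 2 == 0)
    odds = sum(1 for i, c in enumerate(hp) if c == 'H' and i % 2 == 1)
    return evens, odds

def matched_pairs_upper_bound(hp: str) -> int:
    # Since contacts only pair opposite-parity H's, any fold forms at most
    # min(#even-H, #odd-H) such contacts; the approximation ratio of the
    # classic algorithms is measured against a bound of this kind.
    evens, odds = even_odd_h_counts(hp)
    return min(evens, odds)

print(even_odd_h_counts("HPHPPHHPHH"))       # → (4, 2)
print(matched_pairs_upper_bound("HPHPPHHPHH"))  # → 2
```

The one-third/one-fourth algorithms then pick a fold point so that even-position H's on one side face odd-position H's on the other, guaranteeing a constant fraction of this bound.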

  7. Fault-ignorant quantum search

    International Nuclear Information System (INIS)

    Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M

    2014-01-01

    We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)
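As a point of reference for the quadratic speedup that the abstract says is lost under noise, a noiseless Grover search can be simulated directly on the state vector with two reflections per iteration. This is a standard textbook sketch, not the fault-ignorant algorithm from the paper:

```python
import numpy as np

def grover(n_qubits: int, marked: int) -> int:
    """Noiseless Grover search over N = 2**n_qubits items; returns the
    index with the largest final amplitude."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))  # ~O(sqrt(N)) queries
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: flip marked amplitude
        mean = state.mean()
        state = 2 * mean - state              # diffusion: invert about the mean
    return int(np.argmax(np.abs(state)))

print(grover(4, marked=11))  # → 11 after only ~3 oracle calls (N = 16)
```

A classical search needs O(N) queries on average, so the O(sqrt(N)) iteration count here is the speedup at stake; the paper's result is that constant noise per step forces the query count back toward the classical scaling.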

  8. The concept of ignorance in a risk assessment and risk management context

    International Nuclear Information System (INIS)

    Aven, T.; Steen, R.

    2010-01-01

There are many definitions of ignorance in the context of risk assessment and risk management. Most refer to situations in which there is a lack of knowledge, a poor basis for probability assignments, and possible outcomes that are not (fully) known. The purpose of this paper is to discuss the ignorance concept in this setting. Based on a set of risk and uncertainty features, we establish conceptual structures characterising the level of ignorance. These features include the definition of chances (relative frequency-interpreted probabilities) and the existence of scientific uncertainties. Based on these structures, we suggest a definition of ignorance linked to scientific uncertainties, i.e. the lack of understanding of how consequences of the activity are influenced by the underlying factors. In this way, ignorance can be viewed as a condition for applying the precautionary principle. The discussion is also linked to the use and boundaries of risk assessments in the case of large uncertainties, and the methods for classifying risk and uncertainty problems.

  9. What Is Hospitality in the Academy? Epistemic Ignorance and the (Im)possible Gift

    Science.gov (United States)

    Kuokkanen, Rauna

    2008-01-01

    The academy is considered by many as the major Western institution of knowledge. This article, however, argues that the academy is characterized by prevalent "epistemic ignorance"--a concept informed by Spivak's discussion of "sanctioned ignorance." Epistemic ignorance refers to academic practices and discourses that enable the continued exclusion…

  10. Experiences of Being Ignored by Peers during Late Adolescence: Linkages to Psychological Maladjustment

    Science.gov (United States)

    Bowker, Julie C.; Adams, Ryan E.; Fredstrom, Bridget K.; Gilman, Rich

    2014-01-01

    In this study on being ignored by peers, 934 twelfth-grade students reported on their experiences of being ignored, victimized, and socially withdrawn, and completed measures of friendship and psychological adjustment (depression, self-esteem, and global satisfaction). Peer nominations of being ignored, victimized, and accepted by peers were also…

  11. Modelling non-ignorable missing data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cornelis A.W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  12. Modelling non-ignorable missing-data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cees A. W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  13. DFT study of the mechanism and stereoselectivity of the 1,3-dipolar ...

    Indian Academy of Sciences (India)

and methyl acrylate) using the DFT method. An analysis of ... field (SCRF) model based on the polarizable continuum model (PCM) of Tomasi's group has been applied. ... stereoselectivity relative to the gas phase since the trends of ...

  14. Should general psychiatry ignore somatization and hypochondriasis?

    Science.gov (United States)

    Creed, Francis

    2006-10-01

    This paper examines the tendency for general psychiatry to ignore somatization and hypochondriasis. These disorders are rarely included in national surveys of mental health and are not usually regarded as a concern of general psychiatrists; yet primary care doctors and other physicians often feel let down by psychiatry's failure to offer help in this area of medical practice. Many psychiatrists are unaware of the suffering, impaired function and high costs that can result from these disorders, because these occur mainly within primary care and secondary medical services. Difficulties in diagnosis and a tendency to regard them as purely secondary phenomena of depression, anxiety and related disorders mean that general psychiatry may continue to ignore somatization and hypochondriasis. If general psychiatry embraced these disorders more fully, however, it might lead to better prevention and treatment of depression as well as helping to prevent the severe disability that may arise in association with these disorders.

  15. Inattentional blindness for ignored words: comparison of explicit and implicit memory tasks.

    Science.gov (United States)

    Butler, Beverly C; Klein, Raymond

    2009-09-01

    Inattentional blindness is described as the failure to perceive a supra-threshold stimulus when attention is directed away from that stimulus. Based on performance on an explicit recognition memory test and concurrent functional imaging data, Rees, Russell, Frith, and Driver [Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504-2507] reported inattentional blindness for word stimuli that were fixated but ignored. The present study examined both explicit and implicit memory for fixated but ignored words using a selective-attention task in which overlapping picture/word stimuli were presented at fixation. No explicit awareness of the unattended words was apparent on a recognition memory test. Analysis of an implicit memory task, however, indicated that unattended words were perceived at a perceptual level. Thus, the selective-attention task did not result in perfect filtering as suggested by Rees et al. While there was no evidence of conscious perception, subjects were not blind to the implicit perceptual properties of fixated but ignored words.

  16. The Mathematical Miseducation of America's Youth: Ignoring Research and Scientific Study in Education.

    Science.gov (United States)

    Battista, Michael T.

    1999-01-01

    Because traditional instruction ignores students' personal construction of mathematical meaning, mathematical thought development is not properly nurtured. Several issues must be addressed, including adults' ignorance of math- and student-learning processes, identification of math-education research specialists, the myth of coverage, testing…

  17. Understanding Selective Downregulation of c-Myc Expression through Inhibition of General Transcription Regulators in Multiple Myeloma

    Science.gov (United States)

    2015-06-01

    We next tested whether BET bromodomain inhibition mitigated the activation of proadhesion pathways in aortic endothelium, which occurs during the...tinuum of activity as Myc flickers on and off of weakly bound, weakly expressed promoters, but stays longer or more frequently at high output promoters

  18. Investigating Deviance Distraction and the Impact of the Modality of the To-Be-Ignored Stimuli.

    Science.gov (United States)

    Marsja, Erik; Neely, Gregory; Ljungberg, Jessica K

    2018-03-01

    It has been suggested that deviance distraction is caused by unexpected sensory events in the to-be-ignored stimuli violating the cognitive system's predictions of incoming stimuli. The majority of research has used methods where the to-be-ignored expected (standards) and the unexpected (deviants) stimuli are presented within the same modality. Less is known about the behavioral impact of deviance distraction when the to-be-ignored stimuli are presented in different modalities (e.g., standards and deviants presented in different modalities). In three experiments using cross-modal oddball tasks with mixed-modality to-be-ignored stimuli, we examined the distractive role of unexpected auditory deviants presented in a continuous stream of expected standard vibrations. The results showed that deviance distraction seems to be dependent upon the to-be-ignored stimuli being presented within the same modality, and that the simple omission of something expected (in this case, a standard vibration) may be enough to capture attention and distract performance.

  19. Is There Such a Thing as 'White Ignorance' in British Education?

    Science.gov (United States)

    Bain, Zara

    2018-01-01

    I argue that political philosopher Charles W. Mills' twin concepts of 'the epistemology of ignorance' and 'white ignorance' are useful tools for thinking through racial injustice in the British education system. While anti-racist work in British education has a long history, racism persists in British primary, secondary and tertiary education. For…

  20. Non-ignorable missingness item response theory models for choice effects in examinee-selected items.

    Science.gov (United States)

    Liu, Chen-Wei; Wang, Wen-Chung

    2017-11-01

    Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.

  1. Can Strategic Ignorance Explain the Evolution of Love?

    Science.gov (United States)

    Bear, Adam; Rand, David G

    2018-04-24

    People's devotion to, and love for, their romantic partners poses an evolutionary puzzle: Why is it better to stop your search for other partners once you enter a serious relationship when you could continue to search for somebody better? A recent formal model based on "strategic ignorance" suggests that such behavior can be adaptive and favored by natural selection, so long as you can signal your unwillingness to "look" for other potential mates to your current partner. Here, we re-examine this conclusion with a more detailed model designed to capture specific features of romantic relationships. We find, surprisingly, that devotion does not typically evolve in our model: Selection favors agents who choose to "look" while in relationships and who allow their partners to do the same. Non-looking is only expected to evolve if there is an extremely large cost associated with being left by your partner. Our results therefore raise questions about the role of strategic ignorance in explaining the evolution of love. Copyright © 2018 Cognitive Science Society, Inc.

  2. The end of ignorance multiplying our human potential

    CERN Document Server

    Mighton, John

    2008-01-01

    A revolutionary call for a new understanding of how people learn. The End of Ignorance conceives of a world in which no child is left behind – a world based on the assumption that each child has the potential to be successful in every subject. John Mighton argues that by recognizing the barriers that we have experienced in our own educational development, by identifying the moment that we became disenchanted with a certain subject and forever closed ourselves off to it, we will be able to eliminate these same barriers from standing in the way of our children. A passionate examination of our present education system, The End of Ignorance shows how we all can work together to reinvent the way that we are taught. John Mighton, the author of The Myth of Ability, is the founder of JUMP Math, a system of learning based on the fostering of emergent intelligence. The program has proved so successful that an entire class of Grade 3 students, including so-called slow learners, scored over 90% on a Grade 6 math test. A ...

  3. Should general psychiatry ignore somatization and hypochondriasis?

    OpenAIRE

    CREED, FRANCIS

    2006-01-01

    This paper examines the tendency for general psychiatry to ignore somatization and hypochondriasis. These disorders are rarely included in national surveys of mental health and are not usually regarded as a concern of general psychiatrists; yet primary care doctors and other physicians often feel let down by psychiatry's failure to offer help in this area of medical practice. Many psychiatrists are unaware of the suffering, impaired function and high costs that can result fr...

  4. Parabolic approximation method for fast magnetosonic wave propagation in tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Perkins, F.W.; Hwang, D.Q.

    1985-07-01

    Fast magnetosonic wave propagation in a cylindrical tokamak model is studied using a parabolic approximation method in which poloidal variations of the wave field are considered weak in comparison to the radial variations. Diffraction effects, which are ignored by ray tracing methods, are included self-consistently using the parabolic method since continuous representations for the wave electromagnetic fields are computed directly. Numerical results are presented which illustrate the cylindrical convergence of the launched waves into a diffraction-limited focal spot on the cyclotron absorption layer near the magnetic axis for a wide range of plasma confinement parameters

  5. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    Science.gov (United States)

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximation method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving the acoustic streaming problems, since it affects the global flow. By adequately calculating the initial condition for first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.
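The general successive-approximations idea the record refers to can be sketched with a one-dimensional fixed-point iteration (a toy stand-in for the acoustic-streaming solver, which iterates on the Navier-Stokes equations; the function and tolerance below are illustrative):

```python
import math

def successive_approximations(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# Solve x = cos(x); the fixed point is the Dottie number, ~0.739085.
root = successive_approximations(math.cos, x0=1.0)
print(round(root, 6))  # 0.739085
```

As in the acoustic-streaming case, the iteration's behavior depends on the starting point x0: the scheme only converges when the map is contracting around the fixed point being sought.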

  6. Professional orientation and pluralistic ignorance among jail correctional officers.

    Science.gov (United States)

    Cook, Carrie L; Lane, Jodi

    2014-06-01

    Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were.

  7. 'More is less'. The tax effects of ignoring flow externalities

    International Nuclear Information System (INIS)

    Sandal, Leif K.; Steinshamn, Stein Ivar; Grafton, R. Quentin

    2003-01-01

    Using a model of non-linear, non-monotone decay of the stock pollutant, and starting from the same initial conditions, the paper shows that an optimal tax that corrects for both stock and flow externalities may result in a lower tax, fewer cumulative emissions (less decay in emissions) and higher output at the steady state than a corrective tax that ignores the flow externality. This 'more is less' result emphasizes that setting a corrective tax that ignores the flow externality, or imposing a corrective tax at too low a level where there exists only a stock externality, may affect both transitory and steady-state output, tax payments and cumulative emissions. The result has important policy implications for decision makers setting optimal corrective taxes and targeted emission limits whenever stock externalities exist

  8. Persistence of Memory for Ignored Lists of Digits: Areas of Developmental Constancy and Change.

    Science.gov (United States)

    Cowan, Nelson; Nugent, Lara D.; Elliott, Emily M.; Saults, J. Scott

    2000-01-01

    Examined persistence of sensory memory by studying developmental differences in recall of attended and ignored lists of digits for second-graders, fifth-graders, and adults. Found developmental increase in the persistence of memory only for the final item in an ignored list, which is the item for which sensory memory is thought to be the most…

  9. Willful Ignorance and the Death Knell of Critical Thought

    Science.gov (United States)

    Rubin, Daniel Ian

    2018-01-01

    Independent, critical thought has never been more important in the United States. In the Age of Trump, political officials spout falsehoods called "alternative facts" as if they were on equal footing with researchable, scientific data. At the same time, an unquestioning populace engages in acts of "willful ignorance" on a daily…

  10. Deep-inelastic structure functions in an approximation to the bag theory

    International Nuclear Information System (INIS)

    Jaffe, R.L.

    1975-01-01

    A cavity approximation to the bag theory developed earlier is extended to the treatment of forward virtual Compton scattering. In the Bjorken limit and for small values of ω (ω = |2p·q/q²|) it is argued that the operator nature of the bag boundaries might be ignored. Structure functions are calculated in one and three dimensions. Bjorken scaling is obtained. The model provides a realization of light-cone current algebra and possesses a parton interpretation. The structure functions show a quasielastic peak. The spreading of the structure functions about the peak is associated with confinement. As expected, Regge behavior is not obtained for large ω. The "momentum sum rule" is saturated, indicating that the hadron's charged constituents carry all the momentum in this model. νW_L is found to scale and is calculable. Application of the model to the calculation of spin-dependent and chiral-symmetry-violating structure functions is proposed. The nature of the intermediate states in this approximation is discussed. Problems associated with the cavity approximation are also discussed

  11. Traffic forecasts ignoring induced demand

    DEFF Research Database (Denmark)

    Næss, Petter; Nicolaisen, Morten Skou; Strand, Arvid

    2012-01-01

    ...performance of a proposed road project in Copenhagen with and without short-term induced traffic included in the transport model. The available transport model was not able to include long-term induced traffic resulting from changes in land use and in the level of service of public transport. Even though the model calculations included only a part of the induced traffic, the difference in cost-benefit results compared to the model excluding all induced traffic was substantial. The results show lower travel time savings, more adverse environmental impacts and a considerably lower benefit-cost ratio when induced traffic is partly accounted for than when it is ignored. By exaggerating the economic benefits of road capacity increase and underestimating its negative effects, omission of induced traffic can result in over-allocation of public money on road construction and correspondingly less focus on other...
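A back-of-the-envelope sketch of the record's point (all figures invented for illustration, not taken from the Copenhagen study): counting induced traffic erodes the predicted time savings and adds environmental costs, which can push a benefit-cost ratio below 1.

```python
# Hypothetical appraisal figures, in arbitrary monetary units.
cost = 100.0  # construction cost

# Ignoring induced traffic: full predicted time savings, small externality.
benefits_no_induced = 180.0 - 20.0   # time savings minus environmental cost

# Partly accounting for induced traffic: extra trips erode time savings
# and add environmental damage.
benefits_with_induced = 140.0 - 45.0

bcr_ignored = benefits_no_induced / cost     # 1.6
bcr_included = benefits_with_induced / cost  # 0.95

print(bcr_ignored, bcr_included)
```

With these assumed numbers the project looks clearly worthwhile when induced traffic is ignored and marginal when it is partly included, which is the mechanism behind the over-allocation the abstract describes.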

  12. Egoism, ignorance and choice : on society's lethal infection

    OpenAIRE

    Camilleri, Jonathan

    2015-01-01

    The ability to choose and our innate selfish, or rather, self-preservative urges are a recipe for disaster. Combining this with man's ignorance by definition and especially his general refusal to accept it, inevitably leads to Man's demise as a species. It is our false notion of freedom which contributes directly to our collective death, and therefore, man's trying to escape death is, in the largest of ways, counterproductive.

  13. The importance of ignoring: Alpha oscillations protect selectivity

    OpenAIRE

    Payne, Lisa; Sekuler, Robert

    2014-01-01

    Selective attention is often thought to entail an enhancement of some task-relevant stimulus or attribute. We discuss the perspective that ignoring irrelevant, distracting information plays a complementary role in information processing. Cortical oscillations within the alpha (8–14 Hz) frequency band have emerged as a marker of sensory suppression. This suppression is linked to selective attention for visual, auditory, somatic, and verbal stimuli. Inhibiting processing of irrelevant input mak...

  14. Tunnel Vision: New England Higher Education Ignores Demographic Peril

    Science.gov (United States)

    Hodgkinson, Harold L.

    2004-01-01

    This author states that American higher education ignores about 90 percent of the environment in which it operates. Colleges change admissions requirements without even informing high schools in their service areas. Community college graduates are denied access to four-year programs because of policy changes made only after it was too late for the…

  15. Introduction to Methods of Approximation in Physics and Astronomy

    Science.gov (United States)

    van Putten, Maurice H. P. M.

    2017-04-01

    ...secular behavior. For instance, secular evolution of orbital parameters may derive from averaging over essentially periodic behavior on relatively short, orbital periods. When the original number of degrees of freedom is large, averaging over dynamical time scales may lead to a formulation in terms of a system in approximately thermodynamic equilibrium subject to evolution on a secular time scale by a regular or singular perturbation. In modern astrophysics and cosmology, gravitation is being probed across an increasingly broad range of scales and more accurately so than ever before. These observations probe weak gravitational interactions below what is encountered in our solar system by many orders of magnitude. These observations hereby probe (curved) spacetime at low energy scales that may reveal novel properties hitherto unanticipated in the classical vacuum of Newtonian mechanics and Minkowski spacetime. Dark energy and dark matter encountered on the scales of galaxies and beyond, therefore, may be, in part, revealing our ignorance of the vacuum at the lowest energy scales encountered in cosmology. In this context, our application of Newtonian mechanics to globular clusters, galaxies and cosmology is an approximation assuming a classical vacuum, ignoring the potential for hidden low energy scales emerging on cosmological scales. Given our ignorance of the latter, this poses a challenge in the potential for unknown systematic deviations. If of quantum mechanical origin, such deviations are often referred to as anomalies. While they are small in traditional, macroscopic Newtonian experiments in the laboratory, the same is not a given in the limit of arbitrarily weak gravitational interactions. We hope this selection of introductory material is useful and kindles the reader's interest to become a creative member of modern astrophysics and cosmology.

  16. Ignorance Is Bliss, But for Whom? The Persistent Effect of Good Will on Cooperation

    Directory of Open Access Journals (Sweden)

    Mike Farjam

    2016-10-01

    Who benefits from the ignorance of others? We address this question from the point of view of a policy maker who can induce some ignorance into a system of agents competing for resources. Evolutionary game theory shows that when unconditional cooperators or ignorant agents compete with defectors in two-strategy settings, unconditional cooperators get exploited and are rendered extinct. In contrast, conditional cooperators, by utilizing some kind of reciprocity, are able to survive and sustain cooperation when competing with defectors. We study how cooperation thrives in a three-strategy setting where there are unconditional cooperators, conditional cooperators and defectors. By means of simulation on various kinds of graphs, we show that conditional cooperators benefit from the existence of unconditional cooperators in the majority of cases. However, in worlds that make cooperation hard to evolve, defectors benefit.
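The advantage of conditional over unconditional cooperation can be sketched with a deterministic repeated prisoner's dilemma (a toy model, not the paper's graph-based simulation; the payoff values are the standard illustrative T=5, R=3, P=1, S=0):

```python
# Payoffs for (row move, column move): (row payoff, column payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def defector(opp_last):
    return 'D'                 # always defect

def unconditional(opp_last):
    return 'C'                 # always cooperate, regardless of opponent

def tit_for_tat(opp_last):
    return opp_last            # conditional cooperation: copy opponent

def play(strategy_a, strategy_b, rounds=10):
    total_a = total_b = 0
    last_a = last_b = 'C'      # conditional cooperators open with 'C'
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        total_a += pa
        total_b += pb
        last_a, last_b = move_a, move_b
    return total_a, total_b

d_vs_uc, _ = play(defector, unconditional)  # exploits every round
d_vs_cc, _ = play(defector, tit_for_tat)    # exploits only the first round
print(d_vs_uc, d_vs_cc)  # 50 14
```

The defector harvests the temptation payoff from the unconditional cooperator in all ten rounds, but the conditional cooperator's reciprocity cuts the exploitation off after round one, which is why conditional cooperators can survive where unconditional ones go extinct.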

  17. Crimes committed by indigenous people in ignorance of the law

    Directory of Open Access Journals (Sweden)

    Diego Fernando Chimbo Villacorte

    2017-07-01

    This analysis focuses on cases in which an indigenous person commits a crime in ignorance of the law: not only when he is entirely unaware that his conduct is unlawful, but also when he believes he is acting in strict accordance with ancestral beliefs and customs that in some cases clash with positive law. It likewise addresses the impossibility of imposing a penalty (when the offence is committed outside the community) or indigenous purification (when the act disturbs social peace within the indigenous community), and focuses mainly on the impossibility of imposing a security measure when a crime has been committed outside the community: the offender is deemed unimpeachable and returns to his community, generating a discriminatory treatment that denies the culturally different their self-determination.

  18. The Marley hypothesis: denial of racism reflects ignorance of history.

    Science.gov (United States)

    Nelson, Jessica C; Adams, Glenn; Salter, Phia S

    2013-02-01

    This study used a signal detection paradigm to explore the Marley hypothesis--that group differences in perception of racism reflect dominant-group denial of and ignorance about the extent of past racism. White American students from a midwestern university and Black American students from two historically Black universities completed surveys about their historical knowledge and perception of racism. Relative to Black participants, White participants perceived less racism in both isolated incidents and systemic manifestations of racism. They also performed worse on a measure of historical knowledge (i.e., they did not discriminate historical fact from fiction), and this group difference in historical knowledge mediated the differences in perception of racism. Racial identity relevance moderated group differences in perception of systemic manifestations of racism (but not isolated incidents), such that group differences were stronger among participants who scored higher on a measure of racial identity relevance. The results help illuminate the importance of epistemologies of ignorance: cultural-psychological tools that afford denial of and inaction about injustice.
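The sensitivity index used in signal detection paradigms of this kind can be computed as d′ = z(hit rate) − z(false-alarm rate); the rates below are hypothetical illustrations, not the study's data.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for discriminating real historical events ("signal")
# from fabricated ones ("noise").
knowledgeable = d_prime(0.85, 0.20)  # good discrimination
near_chance = d_prime(0.60, 0.45)    # poor discrimination

print(round(knowledgeable, 2), round(near_chance, 2))
```

A participant who endorses fact and fiction at similar rates gets a d′ near zero, which is the operational sense in which the White participants in the study "did not discriminate historical fact from fiction."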

  19. Exploitation of commercial remote sensing images: reality ignored?

    Science.gov (United States)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprised of exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  20. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    Science.gov (United States)

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting PI(d) as the probability of infection at a given mean dose d, the widely used dose-response model PI(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ 1 and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1) as a validity measure and a simple rule of thumb β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 to ensure a high validity level (Pr > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1), the closer the approximate formula is to the exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
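The approximate beta-Poisson formula itself is trivial to evaluate, which is why it is so widely used in place of the exact hypergeometric model; a short sketch (the parameter values are illustrative, not fitted to any data set):

```python
def beta_poisson_approx(dose, alpha, beta):
    """Widely used approximation to the exact beta-Poisson dose-response
    model: P_I(d) = 1 - (1 + d/beta) ** (-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Infection probability rises monotonically with dose and stays in [0, 1).
doses = [0.1, 1.0, 10.0, 100.0, 1000.0]
probs = [beta_poisson_approx(d, alpha=0.25, beta=16.2) for d in doses]
print([round(p, 3) for p in probs])
```

The formula is only a stand-in for the exact model when its validity conditions hold; checking those conditions, rather than assuming them, is exactly the gap the abstract's validity measure is meant to close.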

  1. An improved coupled-states approximation including the nearest neighbor Coriolis couplings for diatom-diatom inelastic collision

    Science.gov (United States)

    Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian

    2018-02-01

    Solving the time-independent close coupling equations of a diatom-diatom inelastic collision system by using the rigorous close-coupling approach is numerically difficult because of its expensive matrix manipulation. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamic calculations. This approach only considers the most important Coriolis coupling with the nearest neighbors and ignores weaker Coriolis couplings with farther K channels. As a result, it reduces the computational costs without a significant loss of accuracy. Numerical tests for para-H2+ortho-H2 and para-H2+HD inelastic collision were carried out and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in quantum dynamics of other systems.

  2. On uncertainty in information and ignorance in knowledge

    Science.gov (United States)

    Ayyub, Bilal M.

    2010-05-01

    This paper provides an overview of working definitions of knowledge, ignorance, information and uncertainty and summarises a formalised philosophical and mathematical framework for their analysis. It provides a comparative examination of the generalised information theory and the generalised theory of uncertainty. It summarises foundational bases for assessing the reliability of knowledge constructed as a collective set of justified true beliefs. It discusses system complexity for ancestor simulation potentials. It offers value-driven communication means of knowledge and contrarian knowledge using memes and memetics.

  3. Maggots in the Brain: Sequelae of Ignored Scalp Wound.

    Science.gov (United States)

    Aggarwal, Ashish; Maskara, Prasant

    2018-01-01

    A 26-year-old male had suffered a burn injury to his scalp in childhood and ignored it. He presented with a complaint of something crawling on his head. Inspection of his scalp revealed multiple maggots on the brain surface with erosion of overlying bone and scalp. He was successfully managed by surgical debridement and regular dressing. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Ignoring alarming news brings indifference: Learning about the world and the self.

    Science.gov (United States)

    Paluck, Elizabeth Levy; Shafir, Eldar; Wu, Sherry Jueyu

    2017-10-01

    The broadcast of media reports about moral crises such as famine can subtly depress rather than activate moral concern. Whereas much research has examined the effects of media reports that people attend to, social psychological analysis suggests that what goes unattended can also have an impact. We test the idea that when vivid news accounts of human suffering are broadcast in the background but ignored, people infer from their choice to ignore these accounts that they care less about the issue, compared to those who pay attention and even to those who were not exposed. Consistent with research on self-perception and attribution, three experiments demonstrate that participants who were nudged to distract themselves in front of a television news program about famine in Niger (Study 1), or to skip an online promotional video for the Niger famine program (Study 2), or who chose to ignore the famine in Niger television program in more naturalistic settings (Study 3) all assigned lower importance to poverty and to hunger reduction compared to participants who watched with no distraction or opportunity to skip the program, or to those who did not watch at all. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. The Ignorant Facilitator: Education, Politics and Theatre in Co-Communities

    Science.gov (United States)

    Lev-Aladgem, Shulamith

    2015-01-01

    This article discusses the book "The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation" by the French philosopher, Jacques Rancière. Its intention is to study the potential contribution of this text to the discourse of applied theatre (theatre in co-communities) in general, and the role of the facilitator in particular. It…

  6. Born approximation to a perturbative numerical method for the solution of the Schroedinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-01-01

    A step function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second-order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to the PN methods, is the close connection between the first-order perturbation theory of the PN approach and the well-known Born approximation, and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)

  7. Roles of dark energy perturbations in dynamical dark energy models: can we ignore them?

    Science.gov (United States)

    Park, Chan-Gyung; Hwang, Jai-chan; Lee, Jae-heon; Noh, Hyerim

    2009-10-09

    We show the importance of properly including the perturbations of the dark energy component in dynamical dark energy models based on a scalar field and modified gravity theories, in order to match present and future observational precision. Based on a simple scaling scalar-field dark energy model, we show that substantial, observationally distinguishable differences appear when the dark energy perturbation is ignored. Ignoring it makes the perturbed system of equations inconsistent, and the resulting deviations in (gauge-invariant) power spectra depend on the gauge choice.

  8. The effects of systemic crises when investors can be crisis ignorant

    NARCIS (Netherlands)

    H.J.W.G. Kole (Erik); C.G. Koedijk (Kees); M.J.C.M. Verbeek (Marno)

    2004-01-01

    textabstractSystemic crises can largely affect asset allocations due to the rapid deterioration of the risk-return trade-off. We investigate the effects of systemic crises, interpreted as global simultaneous shocks to financial markets, by introducing an investor adopting a crisis ignorant or crisis

  9. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2012-05-01

    Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and, thus, derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
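    As background for the abstract above (standard textbook material, not taken from the paper itself), the double square-root traveltime for a point scatterer in a constant-velocity medium can be written in midpoint/half-offset coordinates as:

```latex
% Two-way DSR traveltime for a point scatterer with zero-offset time t_0,
% constant velocity v, midpoint displacement y, and half-offset h:
t(y, h) = \sqrt{\frac{t_0^2}{4} + \frac{(y + h)^2}{v^2}}
        + \sqrt{\frac{t_0^2}{4} + \frac{(y - h)^2}{v^2}}
```

    Each square root is a one-way time, from the source (at lateral distance y + h) or the receiver (at y − h) to the scatterer; the eikonal form derived from this relation inherits the singularity for horizontally traveling waves that the abstract works around.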

  10. Forgotten and Ignored: Special Education in First Nations Schools in Canada

    Science.gov (United States)

    Phillips, Ron

    2010-01-01

    Usually reviews of special education in Canada describe the special education programs, services, policies, and legislation that are provided by the provinces and territories. The reviews consistently ignore the special education programs, services, policies, and legislation that are provided by the federal government of Canada. The federal government…

  11. Geographies of knowing, geographies of ignorance: jumping scale in Southeast Asia

    NARCIS (Netherlands)

    van Schendel, W.

    2002-01-01

    'Area studies' use a geographical metaphor to visualise and naturalise particular social spaces as well as a particular scale of analysis. They produce specific geographies of knowing but also create geographies of ignorance. Taking Southeast Asia as an example, in this paper I explore how areas are

  12. Early humans' egalitarian politics: runaway synergistic competition under an adapted veil of ignorance.

    Science.gov (United States)

    Harvey, Marc

    2014-09-01

    This paper proposes a model of human uniqueness based on an unusual distinction between two contrasted kinds of political competition and political status: (1) antagonistic competition, in quest of dominance (antagonistic status), a zero-sum, self-limiting game whose stake--who takes what, when, how--summarizes a classical definition of politics (Lasswell 1936), and (2) synergistic competition, in quest of merit (synergistic status), a positive-sum, self-reinforcing game whose stake becomes "who brings what to a team's common good." In this view, Rawls's (1971) famous virtual "veil of ignorance" mainly conceals politics' antagonistic stakes so as to devise the principles of a just, egalitarian society, yet without providing any means to enforce these ideals (Sen 2009). Instead, this paper proposes that human uniqueness flourished under a real "adapted veil of ignorance" concealing the steady inflation of synergistic politics which resulted from early humans' sturdy egalitarianism. This proposition divides into four parts: (1) early humans first stumbled on a purely cultural means to enforce a unique kind of within-team antagonistic equality--dyadic balanced deterrence thanks to handheld weapons (Chapais 2008); (2) this cultural innovation is thus closely tied to humans' darkest side, but it also launched the cumulative evolution of humans' brightest qualities--egalitarian team synergy and solidarity, together with the associated synergistic intelligence, culture, and communications; (3) runaway synergistic competition for differential merit among antagonistically equal obligate teammates is the single politically selective mechanism behind the cumulative evolution of all these brighter qualities, but numerous factors to be clarified here conceal this mighty evolutionary driver; (4) this veil of ignorance persists today, which explains why humans' unique prosocial capacities are still not clearly understood by science. The purpose of this paper is to start lifting

  13. Lessons in Equality: From Ignorant Schoolmaster to Chinese Aesthetics

    Directory of Open Access Journals (Sweden)

    Ernest Ženko

    2017-09-01

    The postponement of equality is not only a recurring topic in Jacques Rancière’s writings, but also the most defining feature of modern Chinese aesthetics. Particularly in the period after the 1980s, when the country opened its doors to Western ideas, Chinese aesthetics has largely played a subordinate role in an imbalanced knowledge transfer, in which structural inequality was only reinforced. Aesthetics in China plays an important role and is expected not only to interpret literature and art, but also to help build a harmonious society within a globalized world. This is the reason why some commentators – Wang Jianjiang being one of them – point out that it is of utmost importance to eliminate this imbalance and develop a proper Chinese aesthetics. Since the key issue in this development is the problem of inequality, an approach developed by Jacques Rancière, “the philosopher of equality”, is proposed. Even though Rancière wrote extensively about literature, art and aesthetics, a different approach found in his repertoire could prove more fruitful in confronting the problem of Chinese aesthetics. In 1987, he published a book titled The Ignorant Schoolmaster, which contributed to his ongoing philosophical emancipatory project and focused on inequality and its conditions in the realm of education. The Ignorant Schoolmaster nonetheless stretches far beyond the walls of the classroom, or even the educational system, and brings to the fore political implications that cluster around the fundamental core of Rancière's political philosophy: the definition of politics as the verification of the presupposition of the equality of intelligence. Equality cannot be postponed as a goal to be attained only in the future and therefore has to be considered a premise of egalitarian politics that needs to operate as a presupposition. Article received: May 21, 2017; Article accepted: May 28, 2017; Published online

  14. An Oblivious O(1)-Approximation for Single Source Buy-at-Bulk

    KAUST Repository

    Goel, Ashish

    2009-10-01

    We consider the single-source (or single-sink) buy-at-bulk problem with an unknown concave cost function. We want to route a set of demands along a graph to or from a designated root node, and the cost of routing x units of flow along an edge is proportional to some concave, non-decreasing function f such that f(0) = 0. We present a polynomial time algorithm that finds a distribution over trees such that the expected cost of a tree for any f is within an O(1)-factor of the optimum cost for that f. The previous best simultaneous approximation for this problem, even ignoring computation time, was O(log |D|), where D is the multi-set of demand nodes. We design a simple algorithmic framework using the ellipsoid method that finds an O(1)-approximation if one exists, and then construct a separation oracle using a novel adaptation of the Guha, Meyerson, and Munagala [10] algorithm for the single-sink buy-at-bulk problem that proves an O(1) approximation is possible for all f. The number of trees in the support of the distribution constructed by our algorithm is at most 1 + log |D|. © 2009 IEEE.
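    To make the cost model above concrete, here is a minimal sketch (not from the paper; the function names and the example trees are invented for illustration) of evaluating a candidate routing tree under a concave cost f:

```python
import math

def tree_cost(parent, demands, f):
    """Cost of routing all demands to the root (node 0) along a fixed tree.

    parent[v] is the parent of node v (nodes numbered so that parent[v] < v).
    Flow aggregates toward the root, and an edge carrying x units costs f(x),
    with f concave, non-decreasing, and f(0) = 0.
    """
    n = len(parent)
    flow = [0.0] * n
    for v in range(n - 1, 0, -1):   # process children before parents
        flow[v] += demands[v]
        flow[parent[v]] += flow[v]
    # The root has no upward edge, so only edges (v, parent[v]) for v >= 1 cost.
    return sum(f(flow[v]) for v in range(1, n))

f = math.sqrt                 # one admissible concave cost function
star = [0, 0, 0, 0]           # every demand node attached directly to the root
path = [0, 0, 1, 2]           # chain 0-1-2-3: flow aggregates along shared edges
demands = [0.0, 1.0, 1.0, 1.0]

print(tree_cost(star, demands, f))  # three unit-flow edges: 3 * f(1) = 3.0
print(tree_cost(path, demands, f))  # edges carry 3, 2, 1 units: f(3) + f(2) + f(1)
```

    An oblivious O(1)-approximation in the paper's sense is a single distribution over such trees whose expected cost is within a constant factor of the optimum simultaneously for every admissible f.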

  15. Mathematical Practice as Sculpture of Utopia: Models, Ignorance, and the Emancipated Spectator

    Science.gov (United States)

    Appelbaum, Peter

    2012-01-01

    This article uses Rancière's notion of the ignorant schoolmaster and McElheny's differentiation of artist's models from those of the architect and scientist to propose the reconceptualization of mathematics education as the support of emancipated spectators and sculptors of utopia.

  16. Issues ignored in laboratory quality surveillance

    International Nuclear Information System (INIS)

    Zeng Jing; Li Xingyuan; Zhang Tingsheng

    2008-01-01

    According to the requirements for laboratory quality surveillance in ISO 17025, this paper analyzes and discusses the issues ignored in laboratory quality surveillance. To solve the present problems, laboratories must correctly understand the responsibilities involved in quality surveillance, establish effective working routines for it, and conduct the surveillance work accordingly. The object of quality surveillance shall be the 'operators' engaged directly in examination/calibration in the laboratory, especially personnel in training (who are engaged in examination/calibration). Quality supervisors shall be fully authorized, so that they correctly understand their responsibilities and hold the right of 'full supervision'. The laboratory shall also arrange the necessary training for quality supervisors, so that they obtain sufficient guidance in time and have the required qualifications or occupational prerequisites. (authors)

  17. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  18. Mes chers collègues, les moines, ou le partage de l’ignorance

    Directory of Open Access Journals (Sweden)

    Laurence Caillet

    2009-03-01

    My dear colleagues the monks, or the sharing of ignorance. No status has ever surprised me as much as that of “colleague” conferred on me by the monks of the Great Eastern Monastery of Nara. After testing my knowledge of ritual, these very learned monks made great show of their ignorance. Drawing my attention to liturgical details that they held to be incomprehensible, they took obvious pleasure in chatting about history and theology, as if I were capable of making the slightest contribution. This staging of the impenetrable nature of the ritual highlighted the ineffable character of the ceremonies performed in heaven long ago by superior beings. I provided a convenient pretext for describing the vanity of erudition in the face of the accomplishment of the mysteries, and also the importance of this erudition for renewing an original, irreparably unknowable meaning.

  19. Ignoring Memory Hints: The Stubborn Influence of Environmental Cues on Recognition Memory

    Science.gov (United States)

    Selmeczy, Diana; Dobbins, Ian G.

    2017-01-01

    Recognition judgments can benefit from the use of environmental cues that signal the general likelihood of encountering familiar versus unfamiliar stimuli. While incorporating such cues is often adaptive, there are circumstances (e.g., eyewitness testimony) in which observers should fully ignore environmental cues in order to preserve memory…

  20. Illiteracy, Ignorance, and Willingness to Quit Smoking among Villagers in India

    Science.gov (United States)

    Gorty, Prasad V. S. N. R.; Allam, Apparao

    1992-01-01

    During the field work to control oral cancer, difficulty in communication was encountered with illiterates. A study to define the role of illiteracy, ignorance and willingness to quit smoking among the villagers was undertaken in a rural area surrounding Doddipatla Village, A.P., India. Out of a total population of 3,550, 272 (7.7%) persons, mostly in the age range of 21–50 years, attended a cancer detection camp. There were 173 (63.6%) females and 99 (36.4%) males, among whom 66 (M53 + F13) were smokers; 36.4% of males and 63% of females were illiterate. Among the illiterates, it was observed that smoking rate was high (56%) and 47.7% were ignorant of health effects of smoking. The attitude of illiterate smokers was encouraging, as 83.6% were willing to quit smoking. Further research is necessary to design health education material for 413.5 million illiterates living in India (1991 Indian Census). A community health worker, trained in the use of mass media coupled with a person‐to‐person approach, may help the smoker to quit smoking. PMID:1506267

  1. Cross-modal selective attention: on the difficulty of ignoring sounds at the locus of visual attention.

    Science.gov (United States)

    Spence, C; Ranson, J; Driver, J

    2000-02-01

    In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.

  2. The trust game behind the veil of ignorance: A note on gender differences

    NARCIS (Netherlands)

    Vyrastekova, J.; Onderstal, S.

    2008-01-01

    We analyze gender differences in the trust game in a "behind the veil of ignorance" design. This method yields strategies that are consistent with actions observed in the classical trust game experiments. We observe that, on average, men and women do not differ in "trust", and that women are

  3. Ignoring the Obvious: Combined Arms and Fire and Maneuver Tactics Prior to World War I

    National Research Council Canada - National Science Library

    Bruno, Thomas

    2002-01-01

    The armies that entered WWI ignored many pre-war lessons. Though WWI armies later developed revolutionary tactical-level advances, scholars claim that this tactical evolution followed an earlier period...

  4. On the perpetuation of ignorance: system dependence, system justification, and the motivated avoidance of sociopolitical information.

    Science.gov (United States)

    Shepherd, Steven; Kay, Aaron C

    2012-02-01

    How do people cope when they feel uninformed or unable to understand important social issues, such as the environment, energy concerns, or the economy? Do they seek out information, or do they simply ignore the threatening issue at hand? One would intuitively expect that a lack of knowledge would motivate an increased, unbiased search for information, thereby facilitating participation and engagement in these issues-especially when they are consequential, pressing, and self-relevant. However, there appears to be a discrepancy between the importance/self-relevance of social issues and people's willingness to engage with and learn about them. Leveraging the literature on system justification theory (Jost & Banaji, 1994), the authors hypothesized that, rather than motivating an increased search for information, a lack of knowledge about a specific sociopolitical issue will (a) foster feelings of dependence on the government, which will (b) increase system justification and government trust, which will (c) increase desires to avoid learning about the relevant issue when information is negative or when information valence is unknown. In other words, the authors suggest that ignorance-as a function of the system justifying tendencies it may activate-may, ironically, breed more ignorance. In the contexts of energy, environmental, and economic issues, the authors present 5 studies that (a) provide evidence for this specific psychological chain (i.e., ignorance about an issue → dependence → government trust → avoidance of information about that issue); (b) shed light on the role of threat and motivation in driving the second and third links in this chain; and (c) illustrate the unfortunate consequences of this process for individual action in those contexts that may need it most.

  5. The Trust Game Behind the Veil of Ignorance : A Note on Gender Differences

    NARCIS (Netherlands)

    Vyrastekova, J.; Onderstal, A.M.

    2005-01-01

    We analyse gender differences in the trust game in a "behind the veil of ignorance" design. This method yields strategies that are consistent with actions observed in the classical trust game experiments. We observe that, on average, men and women do not differ in "trust", and that women are slightly

  6. Geographies of knowing, geographies of ignorance: jumping scale in Southeast Asia

    OpenAIRE

    van Schendel, W.

    2002-01-01

    'Area studies' use a geographical metaphor to visualise and naturalise particular social spaces as well as a particular scale of analysis. They produce specific geographies of knowing but also create geographies of ignorance. Taking Southeast Asia as an example, in this paper I explore how areas are imagined and how area knowledge is structured to construct area 'heartlands' as well as area 'borderlands'. This is illustrated by considering a large region of Asia (here named Zomia) that did ...

  7. The importance of ignoring: Alpha oscillations protect selectivity.

    Science.gov (United States)

    Payne, Lisa; Sekuler, Robert

    2014-06-01

    Selective attention is often thought to entail an enhancement of some task-relevant stimulus or attribute. We discuss the perspective that ignoring irrelevant, distracting information plays a complementary role in information processing. Cortical oscillations within the alpha (8-14 Hz) frequency band have emerged as a marker of sensory suppression. This suppression is linked to selective attention for visual, auditory, somatic, and verbal stimuli. Inhibiting processing of irrelevant input makes responses more accurate and timely. It also helps protect material held in short-term memory against disruption. Furthermore, this selective process keeps irrelevant information from distorting the fidelity of memories. Memory is only as good as the perceptual representations on which it is based, and on whose maintenance it depends. Modulation of alpha oscillations can be exploited as an active, purposeful mechanism to help people pay attention and remember the things that matter.

  8. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817
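    The core problem the abstract addresses can be shown with a toy simulation (all numbers invented for illustration; this has nothing to do with the actual STAR*D data): when dropout depends on the outcome itself, an analysis that ignores the missingness mechanism is biased.

```python
import random

random.seed(1)

# Hypothetical end-of-trial depression scores for 20,000 patients.
true_scores = [random.gauss(50, 10) for _ in range(20000)]

def dropout_prob(y):
    # Non-ignorable mechanism: patients doing worse (higher score)
    # are more likely to drop out before being measured.
    return min(max((y - 30) / 40, 0.0), 1.0)

# Keep only the scores that survive outcome-dependent dropout.
observed = [y for y in true_scores if random.random() >= dropout_prob(y)]

true_mean = sum(true_scores) / len(true_scores)
obs_mean = sum(observed) / len(observed)
# The completers-only mean understates true symptom severity.
print(round(true_mean, 1), round(obs_mean, 1))
```

    Under missing-at-random (MAR) assumptions this bias would vanish once covariates and earlier outcomes are conditioned on; the pattern-mixture and selection models in the paper are ways to model the dropout mechanism when it cannot be ignored.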

  9. The Capital Costs Conundrum: Why Are Capital Costs Ignored and What Are the Consequences?

    Science.gov (United States)

    Winston, Gordon C.

    1993-01-01

    Colleges and universities historically have ignored the capital costs associated with institutional administration in their estimates of overall and per-student costs. This neglect leads to distortion of data, misunderstandings, and uninformed decision making. The real costs should be recognized in institutional accounting. (MSE)

  10. Monitoring your friends, not your foes: strategic ignorance and the delegation of real authority

    NARCIS (Netherlands)

    Dominguez-Martinez, S.; Sloof, R.; von Siemens, F.

    2010-01-01

    In this laboratory experiment we study the use of strategic ignorance to delegate real authority within a firm. A worker can gather information on investment projects, while a manager makes the implementation decision. The manager can monitor the worker. This allows her to better exploit the

  11. Induced Compton-scattering effects in radiation-transport approximations

    International Nuclear Information System (INIS)

    Gibson, D.R. Jr.

    1982-02-01

    The method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. Previous methods have addressed only problems in which either induced Compton scattering or linear scattering is ignored; problems that include both induced Compton scattering and spatial effects have not been considered previously. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions.

  12. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
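    The elementary building block behind such rank-structured approximations is a low-rank factorization; a deliberately simplified sketch (plain Python, invented function names; the hierarchical formats in the talk apply this idea recursively across groups of tensor dimensions) is a rank-1 matrix approximation computed by power iteration:

```python
def rank1_approx(A, iters=50):
    """Approximate A (a list of rows) by s * u v^T via power iteration.

    Alternately applies A and A^T to converge on the dominant singular
    triple (u, s, v), the best rank-1 approximation in the Frobenius norm.
    """
    m, n = len(A), len(A[0])
    v = [1.0] * n
    s = 0.0
    for _ in range(iters):
        # u = A v, normalised
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        norm_u = sum(x * x for x in u) ** 0.5
        u = [x / norm_u for x in u]
        # v = A^T u; its norm is the singular-value estimate s
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        s = sum(x * x for x in v) ** 0.5
        v = [x / s for x in v]
    return u, s, v

# An exactly rank-1 matrix: the outer product 2 * [1, 2]^T [3, 4].
A = [[6.0, 8.0], [12.0, 16.0]]
u, s, v = rank1_approx(A)
approx = [[s * u[i] * v[j] for j in range(2)] for i in range(2)]  # recovers A
```

    Truncating at rank r > 1 and repeating across matricizations of a tensor is, roughly, what hierarchical tensor formats organize into a tree of such factors.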

  14. Intranasal oxytocin impedes the ability to ignore task-irrelevant facial expressions of sadness in students with depressive symptoms.

    Science.gov (United States)

    Ellenbogen, Mark A; Linnen, Anne-Marie; Cardoso, Christopher; Joober, Ridha

    2013-03-01

    The administration of oxytocin promotes prosocial behavior in humans. The mechanism by which this occurs is unknown, but it likely involves changes in social information processing. In a randomized placebo-controlled study, we examined the influence of intranasal oxytocin and placebo on the interference control component of inhibition (i.e. ability to ignore task-irrelevant information) in 102 participants using a negative affective priming task with sad, angry, and happy faces. In this task, participants are instructed to respond to a facial expression of emotion while simultaneously ignoring another emotional face. On the subsequent trial, the previously-ignored emotional valence may become the emotional valence of the target face. Inhibition is operationalized as the differential delay between responding to a previously-ignored emotional valence and responding to an emotional valence unrelated to the previous one. Although no main effect of drug administration on inhibition was observed, a drug × depressive symptom interaction (β = -0.25; t = -2.6, p < 0.05) predicted the inhibition of sad faces. Relative to placebo, participants with high depression scores who were administered oxytocin were unable to inhibit the processing of sad faces. There was no relationship between drug administration and inhibition among those with low depression scores. These findings are consistent with increasing evidence that oxytocin alters social information processing in ways that have both positive and negative social outcomes. Because elevated depression scores are associated with an increased risk for major depressive disorder, difficulties inhibiting mood-congruent stimuli following oxytocin administration may be associated with risk for depression. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Induced Compton scattering effects in radiation transport approximations

    International Nuclear Information System (INIS)

    Gibson, D.R. Jr.

    1982-01-01

    In this thesis the method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. Previous methods have addressed only problems in which either induced Compton scattering or linear scattering is ignored; problems that include both induced Compton scattering and spatial effects have not been considered previously. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions.

  16. Sarajevo: Politics and Cultures of Remembrance and Ignorance

    Directory of Open Access Journals (Sweden)

    Adla Isanović

    2017-10-01

    Full Text Available This text critically reflects on cultural events organized to mark the 100th anniversary of the start of the First World War in Sarajevo and Bosnia & Herzegovina. It elaborates on disputes which showed that culture is at the centre of identity politics and struggles (which can also take a fascist nationalist form, accept the colonizer’s perspective, etc.), and on how commemorations ‘swallowed’ the past and present, but it primarily contextualizes, historicizes and politicizes Sarajevo 2014 and its politics of visibility. This case is approached as symptomatic of the effects of the current state of capitalism, coloniality, racialization and subjugation, as central to Europe today. Article received: June 2, 2017; Article accepted: June 8, 2017; Published online: October 15, 2017; Original scholarly paper How to cite this article: Isanović, Adla. "Sarajevo: Politics and Cultures of Remembrance and Ignorance." AM Journal of Art and Media Studies 14 (2017): 133-144. doi: 10.25038/am.v0i14.199

  17. Monitored by your friends, not your foes: Strategic ignorance and the delegation of real authority

    NARCIS (Netherlands)

    Dominguez-Martinez, S.; Sloof, R.; von Siemens, F.

    2012-01-01

    In this laboratory experiment we study the use of strategic ignorance to delegate real authority within a firm. A worker can gather information on investment projects, while a manager makes the implementation decision. The manager can monitor the worker. This allows her to exploit any information

  18. Burden of Circulatory System Diseases and Ignored Barriers of Knowledge Translation

    Directory of Open Access Journals (Sweden)

    Hamed-Basir Ghafouri

    2012-10-01

    Full Text Available Circulatory system diseases rank third in disability-adjusted life years among Iranians, and ischemic cardiac diseases are the main cause of this burden. Despite the available evidence on risk factors for the disease, no effective intervention has been implemented to control and prevent it. This paper non-systematically reviews the available literature on the problem, its solutions, and the barriers to implementing knowledge translation in Iran. It seems that cultural and motivational issues are ignored in knowledge translation interventions, but there is hope in the projects already started and in the preparation of students as the next generation of knowledge transferors.

  19. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    Science.gov (United States)

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948

  20. Approximate solutions for radial travel time and capture zone in unconfined aquifers.

    Science.gov (United States)

    Zhou, Yangxiao; Haitjema, Henk

    2012-01-01

    Radial time-of-travel (TOT) capture zones have been evaluated for unconfined aquifers with and without recharge. The solutions of travel time for unconfined aquifers are rather complex and have been replaced with much simpler approximate solutions without significant loss of accuracy in most practical cases. The current "volumetric method" for calculating the radius of a TOT capture zone assumes no recharge and a constant aquifer thickness. It was found that for unconfined aquifers without recharge, the volumetric method leads to a smaller and less protective wellhead protection zone when drawdowns are ignored. However, if the saturated thickness near the well is used in the volumetric method, a larger, more protective TOT capture zone is obtained. The same is true when the volumetric method is used in the presence of recharge; for that case, however, it leads to unreasonable overprediction of TOT capture zones of 5 years or more. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
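The volumetric method referred to above equates the volume pumped in time t with the pore volume of a cylinder of saturated aquifer around the well, giving r = sqrt(Q t / (π n H)). A minimal sketch (the numerical values are illustrative, not from the paper):

```python
import math

def tot_capture_radius(Q, t, n, H):
    """Volumetric-method radius of a time-of-travel capture zone.

    Q : pumping rate [m^3/day]
    t : travel time [days]
    n : effective porosity [-]
    H : saturated thickness [m] (per the paper, using the thickness
        near the well yields a larger, more protective zone)
    """
    return math.sqrt(Q * t / (math.pi * n * H))

# Illustrative 5-year TOT zone
r = tot_capture_radius(Q=500.0, t=5 * 365.0, n=0.25, H=20.0)
print(round(r, 1))  # -> 241.0 metres
```

Note how the radius shrinks as the assumed saturated thickness H grows, which is the sensitivity the abstract's protectiveness argument turns on.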

  1. Ignorance of electrosurgery among obstetricians and gynaecologists.

    Science.gov (United States)

    Mayooran, Zorana; Pearce, Scott; Tsaltas, Jim; Rombauts, Luk; Brown, T Ian H; Lawrence, Anthony S; Fraser, Kym; Healy, David L

    2004-12-01

    The purpose of this study was to assess the level of skill of laparoscopic surgeons in electrosurgery. Subjects were asked to complete a practical diathermy station and a written test of electrosurgical knowledge. Tests were held in teaching and non-teaching hospitals. Twenty specialists in obstetrics and gynaecology were randomly selected and tested on the Monash University gynaecological laparoscopic pelvi-trainer. Twelve candidates were consultants with 9-28 years of practice in operative laparoscopy, and 8 were registrars with up to six years of practice in operative laparoscopy. Seven consultants and one registrar were from rural Australia, and three consultants were from New Zealand. Candidates were marked with checklist criteria resulting in a pass/fail score, as well as a weighted scoring system. We retested 11 candidates one year later with the same stations. There was no improvement in electrosurgery skill after one year of obstetric and gynaecological practice: no candidate successfully completed the written electrosurgery station in the initial test, and only a slight improvement in the pass rate, to 18%, was observed in the second test. The pass rate of the diathermy station dropped from 50% to 36% in the second test. The study found ignorance of electrosurgery/diathermy among gynaecological surgeons; one year later, skills were no better.

  2. Hooking up: Gender Differences, Evolution, and Pluralistic Ignorance

    Directory of Open Access Journals (Sweden)

    Chris Reiber

    2010-07-01

    Full Text Available “Hooking-up” – engaging in no-strings-attached sexual behaviors with uncommitted partners - has become a norm on college campuses, and raises the potential for disease, unintended pregnancy, and physical and psychological trauma. The primacy of sex in the evolutionary process suggests that predictions derived from evolutionary theory may be a useful first step toward understanding these contemporary behaviors. This study assessed the hook-up behaviors and attitudes of 507 college students. As predicted by behavioral-evolutionary theory: men were more comfortable than women with all types of sexual behaviors; women correctly attributed higher comfort levels to men, but overestimated men's actual comfort levels; and men correctly attributed lower comfort levels to women, but still overestimated women's actual comfort levels. Both genders attributed higher comfort levels to same-gendered others, reinforcing a pluralistic ignorance effect that might contribute to the high frequency of hook-up behaviors in spite of the low comfort levels reported and suggesting that hooking up may be a modern form of intrasexual competition between females for potential mates.

  3. New Tests of the Fixed Hotspot Approximation

    Science.gov (United States)

    Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.

    2005-05-01

    We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. 
The results disagree, in particular, with the recent extreme interpretation of

  4. Research on injury compensation and health outcomes: ignoring the problem of reverse causality led to a biased conclusion.

    Science.gov (United States)

    Spearing, Natalie M; Connelly, Luke B; Nghiem, Hong S; Pobereskin, Louis

    2012-11-01

    This study highlights the serious consequences of ignoring reverse causality bias in studies on compensation-related factors and health outcomes and demonstrates a technique for resolving this problem of observational data. Data from an English longitudinal study on factors, including claims for compensation, associated with recovery from neck pain (whiplash) after rear-end collisions are used to demonstrate the potential for reverse causality bias. Although it is commonly believed that claiming compensation leads to worse recovery, it is also possible that poor recovery may lead to compensation claims--a point that is seldom considered and never addressed empirically. This pedagogical study compares the association between compensation claiming and recovery when reverse causality bias is ignored and when it is addressed, controlling for the same observable factors. When reverse causality is ignored, claimants appear to have a worse recovery than nonclaimants; however, when reverse causality bias is addressed, claiming compensation appears to have a beneficial effect on recovery, ceteris paribus. To avert biased policy and judicial decisions that might inadvertently disadvantage people with compensable injuries, there is an urgent need for researchers to address reverse causality bias in studies on compensation-related factors and health. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Double jeopardy, the equal value of lives and the veil of ignorance: a rejoinder to Harris.

    Science.gov (United States)

    McKie, J; Kuhse, H; Richardson, J; Singer, P

    1996-08-01

    Harris levels two main criticisms against our original defence of QALYs (Quality Adjusted Life Years). First, he rejects the assumption implicit in the QALY approach that not all lives are of equal value. Second, he rejects our appeal to Rawls's veil of ignorance test in support of the QALY method. In the present article we defend QALYs against Harris's criticisms. We argue that some of the conclusions Harris draws from our view that resources should be allocated on the basis of potential improvements in quality of life and quantity of life are erroneous, and that others lack the moral implications Harris claims for them. On the other hand, we defend our claim that a rational egoist, behind a veil of ignorance, could consistently choose to allocate life-saving resources in accordance with the QALY method, despite Harris's claim that a rational egoist would allocate randomly if there is no better than a 50% chance of being the recipient.

  6. Uncertain Climate Forecasts From Multimodel Ensembles: When to Use Them and When to Ignore Them

    OpenAIRE

    Jewson, Stephen; Rowlands, Dan

    2010-01-01

    Uncertainty around multimodel ensemble forecasts of changes in future climate reduces the accuracy of those forecasts. For very uncertain forecasts this effect may mean that the forecasts should not be used. We investigate the use of the well-known Bayesian Information Criterion (BIC) to make the decision as to whether a forecast should be used or ignored.
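The decision rule sketched above rests on the standard BIC formula, BIC = k ln n - 2 ln L̂, where k is the number of model parameters, n the number of observations, and L̂ the maximized likelihood. A hedged sketch of such a use-or-ignore decision (the comparison against a trivial baseline is our illustration, and the log-likelihoods are made up; the paper's exact procedure may differ):

```python
import math

def bic(k, n, log_likelihood):
    """Bayesian Information Criterion: k * ln(n) - 2 * ln(L-hat)."""
    return k * math.log(n) - 2.0 * log_likelihood

# Illustrative: compare a multimodel-ensemble forecast against a
# trivial no-change baseline on n observations.
n = 40
bic_forecast = bic(k=3, n=n, log_likelihood=-52.0)
bic_baseline = bic(k=1, n=n, log_likelihood=-58.0)

# Use the forecast only if it achieves the lower (better) BIC;
# otherwise its uncertainty outweighs its information and it is ignored.
use_forecast = bic_forecast < bic_baseline
print(use_forecast)  # -> True for these illustrative numbers
```

The extra-parameter penalty k ln n is what lets a very uncertain forecast lose to a simpler baseline, which is the paper's "when to ignore them" case.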

  7. Ignoring versus updating in working memory reveal differential roles of attention and feature binding

    OpenAIRE

    Fallon, SJ; Mattiesing, RM; Dolfen, N; Manohar, SGM; Husain, M

    2017-01-01

    Ignoring distracting information and updating current contents are essential components of working memory (WM). Yet, although both require controlling irrelevant information, it is unclear whether they have the same effects on recall and produce the same level of misbinding errors (incorrectly joining the features of different memoranda). Moreover, the likelihood of misbinding may be affected by the feature similarity between the items already encoded into memory and the information that has ...

  8. Food Quality Certificates and Research on Effect of Food Quality Certificates to Determinate Ignored Level of Buying Behavioral: A Case Study in Hitit University Feas Business Department

    Directory of Open Access Journals (Sweden)

    Hulya CAGIRAN KENDIRLI

    2014-12-01

    According to the results of the research, there is no relationship between students' demographic characteristics and their ignorance of food quality legislation, but there is a relationship between gender and such ignorance.

  9. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. They complement the earlier derived self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy significantly surpasses that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even where conventional techniques meet serious difficulties.

  10. Excitatory and inhibitory priming by attended and ignored non-recycled words with monolinguals and bilinguals.

    Science.gov (United States)

    Neumann, Ewald; Nkrumah, Ivy K; Chen, Zhe

    2018-03-03

    Experiments examining identity priming from attended and ignored novel words (words that are used only once except when repetition is required due to experimental manipulation) in a lexical decision task are reported. Experiment 1 tested English monolinguals whereas Experiment 2 tested Twi (a native language of Ghana, Africa)-English bilinguals. Participants were presented with sequential pairs of stimuli composed of a prime followed by a probe, with each containing two items. The participants were required to name the target word in the prime display, and to make a lexical decision to the target item in the probe display. On attended repetition (AR) trials the probe target item was identical to the target word on the preceding attentional display. On ignored repetition (IR) trials the probe target item was the same as the distractor word in the preceding attentional display. The experiments produced facilitated (positive) priming in the AR trials and delayed (negative) priming in the IR trials. Significantly, the positive and negative priming effects also replicated across both monolingual and bilingual groups of participants, despite the fact that the bilinguals were responding to the task in their non-dominant language.

  11. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
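The rational-function resummation underlying Padé approximants can be sketched numerically. For f(x) = e^x with series 1 + x + x²/2, the [1/1] Padé approximant is (1 + x/2)/(1 - x/2): it matches the series through order x² and often extends the range of accuracy. (This generic example is ours; it is not the modulated construction of the letter.)

```python
import math

def pade_1_1(x):
    # [1/1] Pade approximant of exp(x), built from the series
    # coefficients c0 = 1, c1 = 1, c2 = 1/2: (1 + x/2) / (1 - x/2)
    return (1 + x / 2) / (1 - x / 2)

def taylor_2(x):
    # Second-order truncated Taylor series of exp(x)
    return 1 + x + x * x / 2

x = 0.5
err_pade = abs(pade_1_1(x) - math.exp(x))
err_taylor = abs(taylor_2(x) - math.exp(x))
print(err_pade < err_taylor)  # -> True: the rational form is closer here
```

The pole of the [1/1] form at x = 2 also illustrates the usual caveat: a low-order Padé approximant can fail badly outside its useful range.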

  12. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two-, three-, six-, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
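The state-dependent convex combination described above can be sketched as follows. Both value approximators and the blending weight below are simple stand-ins of our own; in the paper they come from StaF kernels and R-MBRL, respectively.

```python
import math

def v_staf(x):
    # Stand-in for the local StaF-based value approximation
    return x * x

def v_rmbrl(x):
    # Stand-in for the regional R-MBRL-based value approximation
    return 1.1 * x * x

def lam(x, radius=1.0):
    """Blending weight: ~1 far from the origin (follow StaF),
    ~0 inside a neighborhood of the origin (follow R-MBRL)."""
    return 1.0 / (1.0 + math.exp(-10.0 * (abs(x) - radius)))

def v(x):
    # State-dependent convex combination of the two approximations
    l = lam(x)
    return l * v_staf(x) + (1.0 - l) * v_rmbrl(x)

# Near the origin the blend tracks R-MBRL; far away it tracks StaF.
print(abs(v(0.1) - v_rmbrl(0.1)) < 1e-3, abs(v(5.0) - v_staf(5.0)) < 1e-3)
```

Because lam(x) lies in [0, 1], the blend is always a convex combination, so it inherits boundedness from whichever approximator dominates in each region.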

  13. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. 
Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  14. The Insider Threat to Cybersecurity: How Group Process and Ignorance Affect Analyst Accuracy and Promptitude

    Science.gov (United States)

    2017-09-01

    Dissertation, September 2017, by Ryan F. Kelly. References cited include: McCarthy, J. (1980). Circumscription - A Form of Nonmonotonic Reasoning. Artificial Intelligence, 13, 27-39; McClure, S., Scambray, J., & Kurtz, G. (2012)...

  15. Aspiring to Spectral Ignorance in Earth Observation

    Science.gov (United States)

    Oliver, S. A.

    2016-12-01

    Enabling robust, defensible and integrated decision making in the Era of Big Earth Data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g. combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation we require automation of spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors through application of equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters is proposed, together with a systematic procedure for applying them to relate spectral bands across numerous different sensors. The base unit for comparison in this work is the relative spectral response. From this input, users can specify what constitutes equivalence based on their own conceptualisation of it.

  16. Sophisticated Approval Voting, Ignorance Priors, and Plurality Heuristics: A Behavioral Social Choice Analysis in a Thurstonian Framework

    Science.gov (United States)

    Regenwetter, Michel; Ho, Moon-Ho R.; Tsetlin, Ilia

    2007-01-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two…

  17. On Moderator Detection in Anchoring Research: Implications of Ignoring Estimate Direction

    Directory of Open Access Journals (Sweden)

    Nathan N. Cheek

    2018-05-01

    Full Text Available Anchoring, whereby judgments assimilate to previously considered standards, is one of the most reliable effects in psychology. In the last decade, researchers have become increasingly interested in identifying moderators of anchoring effects. We argue that a drawback of traditional moderator analyses in the standard anchoring paradigm is that they ignore estimate direction—whether participants’ estimates are higher or lower than the anchor value. We suggest that failing to consider estimate direction can sometimes obscure moderation in anchoring tasks, and discuss three potential analytic solutions that take estimate direction into account. Understanding moderators of anchoring effects is essential for a basic understanding of anchoring and for applied research on reducing the influence of anchoring in real-world judgments. Considering estimate direction reduces the risk of failing to detect moderation.

  18. Effects of ignoring baseline on modeling transitions from intact cognition to dementia.

    Science.gov (United States)

    Yu, Lei; Tyas, Suzanne L; Snowdon, David A; Kryscio, Richard J

    2009-07-01

    This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-epsilon4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun.
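The five-state chain described above (three transient cognitive states, two absorbing states) can be written in canonical form, with absorption probabilities given by the fundamental-matrix identity B = (I - Q)^{-1} R. The transition probabilities below are invented for illustration, not estimated from the Nun Study, and the sketch omits the paper's covariates and random effects:

```python
import numpy as np

# Transient states: 0 intact, 1 MCI, 2 GI. Absorbing: death, dementia.
# Q: transient-to-transient transition probabilities (one time step).
Q = np.array([
    [0.80, 0.10, 0.05],   # intact -> intact / MCI / GI
    [0.05, 0.70, 0.15],   # MCI
    [0.00, 0.05, 0.70],   # GI
])
# R: transient-to-absorbing probabilities (death, dementia).
R = np.array([
    [0.04, 0.01],
    [0.05, 0.05],
    [0.10, 0.15],
])
# Each row of the full transition matrix [Q | R] must sum to 1.
assert np.allclose(Q.sum(axis=1) + R.sum(axis=1), 1.0)

# Fundamental matrix N = (I - Q)^-1; B[i, j] = P(absorbed in j | start i).
N = np.linalg.inv(np.eye(3) - Q)
B = N @ R
print(np.round(B, 3))  # each row sums to 1: absorption is certain
```

In the paper the entries of Q and R are not constants but depend on age and APOE-e4 status through a multinomial logistic model, so a matrix like this would be computed per covariate profile.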

  19. Phonological processing of ignored distractor pictures, an fMRI investigation.

    Science.gov (United States)

    Bles, Mart; Jansma, Bernadette M

    2008-02-11

    Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the names of ignored distractor pictures are retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.

  20. Settlers Unsettled: Using Field Schools and Digital Stories to Transform Geographies of Ignorance about Indigenous Peoples in Canada

    Science.gov (United States)

    Castleden, Heather; Daley, Kiley; Sloan Morgan, Vanessa; Sylvestre, Paul

    2013-01-01

    Geography is a product of colonial processes, and in Canada, the exclusion from educational curricula of Indigenous worldviews and their lived realities has produced "geographies of ignorance". Transformative learning is an approach geographers can use to initiate changes in non-Indigenous student attitudes about Indigenous…

  1. Limitations of the paraxial Debye approximation.

    Science.gov (United States)

    Sheppard, Colin J R

    2013-04-01

    In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.

  2. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
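The defining quantity above, the norm of the commutator [U, H] = UH - HU, is easy to probe numerically. A small example (the matrices are our own, purely illustrative, and carry none of the paper's topological structure):

```python
import numpy as np

def comm_norm(U, H):
    """Operator (spectral) norm of the commutator [U, H] = UH - HU."""
    return np.linalg.norm(U @ H - H @ U, ord=2)

# A gapped "Hamiltonian" with a two-fold degenerate ground space.
H = np.diag([0.0, 0.0, 1.0, 1.5])

# Exact symmetry: a unitary acting only within the degenerate ground
# space commutes with H exactly.
U_exact = np.eye(4)
U_exact[:2, :2] = [[0, 1], [1, 0]]      # swap the two ground states

# Approximate symmetry: weakly rotate a ground state into the excited
# sector; the commutator norm is then of order theta, not zero.
theta = 1e-3
c, s = np.cos(theta), np.sin(theta)
Rot = np.eye(4)
Rot[0, 0] = Rot[2, 2] = c
Rot[0, 2], Rot[2, 0] = s, -s
U_approx = U_exact @ Rot

print(comm_norm(U_exact, H), comm_norm(U_approx, H) < 10 * theta)
```

This mirrors the abstract's setup: a small commutator norm signals that U nearly preserves the ground space, and the size of that norm is governed by the coupling into the gapped excited band.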

  3. Learning to Ignore: A Modeling Study of a Decremental Cholinergic Pathway and Its Influence on Attention and Learning

    Science.gov (United States)

    Oros, Nicolas; Chiba, Andrea A.; Nitz, Douglas A.; Krichmar, Jeffrey L.

    2014-01-01

    Learning to ignore irrelevant stimuli is essential to achieving efficient and fluid attention, and serves as the complement to increasing attention to relevant stimuli. The different cholinergic (ACh) subsystems within the basal forebrain regulate attention in distinct but complementary ways. ACh projections from the substantia innominata/nucleus…

  4. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  5. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
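For an orthonormal basis, the simplest greedy algorithm of the kind studied here is m-term thresholding: keep the m expansion coefficients of largest magnitude. A minimal sketch (illustrative; the function name is ours, not the book's):

```python
import numpy as np

def greedy_m_term(x, basis, m):
    # m-term greedy approximation with respect to an orthonormal basis
    # (columns of `basis`): keep the m largest-magnitude coefficients.
    coeffs = basis.T @ x                   # analysis
    idx = np.argsort(-np.abs(coeffs))[:m]  # greedy selection
    approx = basis[:, idx] @ coeffs[idx]   # synthesis
    return approx, idx
```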

  6. 3-D numerical investigation of subsurface flow in anisotropic porous media using multipoint flux approximation method

    KAUST Repository

    Negara, Ardiansyah

    2013-01-01

Anisotropy of the hydraulic properties of subsurface geologic formations is an essential feature, established as a consequence of the different geologic processes these formations undergo over long geologic time scales. In petroleum reservoirs, anisotropy often plays a significant role in dictating the direction of flow, which is no longer dependent only on the pressure gradient direction but also on the principal directions of anisotropy. Furthermore, in complex systems involving multiphase flow in which gravity and capillarity play an important role, anisotropy can also have important influences. There has therefore been a great deal of motivation to account for anisotropy when solving the governing conservation laws numerically. Unfortunately, the two-point flux approximation of the finite difference approach is not capable of handling full-tensor permeability fields. Lately, however, it has become possible to adapt the multipoint flux approximation, which can handle anisotropy, to the framework of finite difference schemes. In the multipoint flux approximation method the stencil is more involved: it requires a 9-point stencil for the 2-D model and a 27-point stencil for the 3-D model, which makes assembling the global system of equations challenging and cumbersome. In this work, we apply the equation-type approach, the experimenting pressure field approach, which breaks the solution of the global problem into the solution of a multitude of local problems, significantly reducing the complexity without affecting the accuracy of the numerical solution. This approach also reduces the computational cost of the simulation.
We have applied this technique to a variety of anisotropy scenarios in 3-D subsurface flow problems, and the numerical results demonstrate that the experimenting pressure field technique fits very well with the multipoint flux approximation.
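Why the two-point flux approximation fails for full-tensor permeability follows directly from Darcy's law. The toy computation below (an illustration, not part of the paper's scheme) shows an off-diagonal permeability entry driving flow transverse to the pressure gradient, a coupling that a two-point stencil cannot see.

```python
import numpy as np

K = np.array([[1.0, 0.5],
              [0.5, 2.0]])      # full (anisotropic) permeability tensor
grad_p = np.array([1.0, 0.0])   # pressure gradient purely in x
flux = -K @ grad_p              # Darcy flux: q = -K grad(p)
# flux = [-1.0, -0.5]: the off-diagonal entry produces flow in y even though
# the pressure gradient is purely in x, which is exactly the effect a
# two-point flux approximation cannot capture.
```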

  7. T cell ignorance is bliss: T cells are not tolerized by Langerhans cells presenting human papillomavirus antigens in the absence of costimulation

    Directory of Open Access Journals (Sweden)

    Andrew W. Woodham

    2016-12-01

Human papillomavirus type 16 (HPV16) infections are intra-epithelial, and thus HPV16 is known to interact with Langerhans cells (LCs), the resident epithelial antigen-presenting cells (APCs). The current paradigm for APC-mediated induction of T cell anergy is through delivery of T cell receptor signals via peptides on MHC molecules (signal 1), but without costimulation (signal 2). We previously demonstrated that LCs exposed to HPV16 in vitro present HPV antigens to T cells without costimulation, but it remained uncertain whether such T cells would remain ignorant, become anergic, or, in the case of CD4+ T cells, differentiate into Tregs. Here we demonstrate that Tregs were not induced by LCs presenting only signal 1, and through a series of in vitro immunizations we show that CD8+ T cells receiving signal 1+2 from LCs weeks after consistently receiving signal 1 are capable of robust effector functions. Importantly, this indicates that T cells are not tolerized but instead remain ignorant of HPV, and are activated given the proper signals. Keywords: T cell anergy, T cell ignorance, Immune tolerance, Human papillomavirus, HPV16, Langerhans cells

  8. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable: (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
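The flavour of a moment fit can be shown for the case the abstract calls exact: a symmetric beta density. The sketch below (our assumed parametrisation, not the paper's full method) recovers the shape parameter of a symmetric Beta(a, a) density rescaled to [-1, 1] from its second moment.

```python
def symmetric_beta_from_moment(m2):
    # For a symmetric Beta(a, a) density rescaled to [-1, 1],
    # E[x^2] = 1 / (2a + 1), so the shape parameter follows directly:
    return (1.0 / m2 - 1.0) / 2.0

# Sanity check: the uniform density on [-1, 1] has E[x^2] = 1/3 and is
# Beta(1, 1), so the fit should return a = 1.
a_uniform = symmetric_beta_from_moment(1.0 / 3.0)
```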

  9. The effects of methylphenidate on prepulse inhibition during attended and ignored prestimuli among boys with attention-deficit hyperactivity disorder.

    Science.gov (United States)

    Hawk, Larry W; Yartz, Andrew R; Pelham, William E; Lock, Thomas M

    2003-01-01

    The present study investigated attentional modification of prepulse inhibition of startle among boys with and without attention-deficit hyperactivity disorder (ADHD). Two hypotheses were tested: (1) whether ADHD is associated with diminished prepulse inhibition during attended prestimuli, but not ignored prestimuli, and (2) whether methylphenidate selectively increases prepulse inhibition to attended prestimuli among boys with ADHD. Participants were 17 boys with ADHD and 14 controls. Participants completed a tone discrimination task in each of two sessions separated by 1 week. ADHD boys were administered methylphenidate (0.3 mg/kg) in one session and placebo in the other session in a randomized, double-blind fashion. During each series of 72 tones (75 dB; half 1200-Hz, half 400-Hz), participants were paid to attend to one pitch and ignore the other. Bilateral eyeblink electromyogram startle responses were recorded in response to acoustic probes (50-ms, 102-dB white noise) presented following the onset of two-thirds of tones, and during one-third of intertrial intervals. Relative to controls, boys with ADHD exhibited diminished prepulse inhibition 120 ms after onset of attended but not ignored prestimuli following placebo administration. Methylphenidate selectively increased prepulse inhibition to attended prestimuli at 120 ms among boys with ADHD to a level comparable to that of controls, who did not receive methylphenidate. These data are consistent with the hypothesis that ADHD involves diminished selective attention and suggest that methylphenidate ameliorates the symptoms of ADHD, at least in part, by altering an early attentional mechanism.

  10. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  11. The wisdom of ignorant crowds: Predicting sport outcomes by mere recognition

    Directory of Open Access Journals (Sweden)

    Stefan M. Herzog

    2011-02-01

The recognition heuristic bets on the fact that people's recognition knowledge of names is a proxy for their competitiveness: in sports, it predicts that the better-known team or player wins a game. We present two studies on the predictive power of recognition in forecasting soccer games (World Cup 2006 and UEFA Euro 2008) and analyze previously published results. The performance of the collective recognition heuristic is compared to two benchmarks: predictions based on official rankings and aggregated betting odds. Across three soccer and two tennis tournaments, the predictions based on recognition performed similarly to those based on rankings; when compared with betting odds, the heuristic fared reasonably well. Forecasts based on rankings, but not those based on betting odds, were improved by incorporating collective recognition information. We discuss the use of recognition for forecasting in sports and conclude that aggregating across individual ignorance spawns collective wisdom.
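The collective recognition heuristic itself is one line of logic: predict that the more widely recognized name wins. A sketch under assumed data structures (the survey-count dictionary and names are hypothetical):

```python
def recognition_forecast(team_a, team_b, recognition_counts):
    # Collective recognition heuristic: predict that the team recognized by
    # more survey respondents wins; abstain (return None) on a tie.
    a = recognition_counts.get(team_a, 0)
    b = recognition_counts.get(team_b, 0)
    if a == b:
        return None
    return team_a if a > b else team_b
```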

  12. Phonological processing of ignored distractor pictures, an fMRI investigation

    Directory of Open Access Journals (Sweden)

    Bles Mart

    2008-02-01

Background: Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and about the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction) or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Results: Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Conclusion: Under some circumstances, the names of ignored distractor pictures are retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.

  13. The Ignorant Environmental Education Teacher: Students Get Empowered and Teach Philosophy of Nature Inspired by Ancient Greek Philosophy

    Science.gov (United States)

    Tsevreni, Irida

    2018-01-01

    This paper presents an attempt to apply Jacques Rancière's emancipatory pedagogy of "the ignorant schoolmaster" to environmental education, which emphasises environmental ethics. The paper tells the story of a philosophy of nature project in the framework of an environmental adult education course at a Second Chance School in Greece,…

  14. From Ignoring to Leading Changes – What Role do Universities Play in Developing Countries? (Case of Croatia

    Directory of Open Access Journals (Sweden)

    Slavica Singer

    2010-12-01

Using the model of the entrepreneurial university, the paper presents the major blockages (the university's own institutional rigidity, fragmented organization, lack of mutual trust between the business sector and universities, no real benchmarks, a legal framework not supportive of opening the university to new initiatives) to Triple Helix interactions in Croatia. Comparing the identified blockages with the expectations (a multidimensional campus, cooperation with the business sector and other stakeholders in designing new educational and research programs) expressed by HEIs in developed countries around the world (2008 EIU survey) indicates new challenges for universities in developing countries. With a Triple Helix approach, not confined within national borders but seen as an international networking opportunity, these challenges can be seen as opportunities; otherwise they are threats. On the scale of ignoring, observing, participating in, and leading positive changes in their surroundings, used to measure the vitality of Triple Helix interactions, Croatian universities sit between the ignoring and observing positions. To move them towards a leading position, coordinated and consistent policies are needed that focus on eliminating the identified blockages. Universities should take the lead in this process; otherwise they will lose credibility as desired partners in developing space for Triple Helix interactions.

  15. Technology trends in econometric energy models: Ignorance or information?

    International Nuclear Information System (INIS)

Boyd, G.; Kokkelenberg, E.; State Univ. of New York, Binghamton, NY; Ross, M.; Michigan Univ., Ann Arbor, MI

    1991-01-01

Simple time trend variables in factor demand models can be statistically powerful, but may tell the researcher very little. Even more complex specifications of technical change, e.g. factor-biased ones, are still the econometrician's 'measure of ignorance' about the shifts that occur in the underlying production process. Furthermore, in periods of rapid technological change, parameters based on time trends may be too large for long-run forecasting. When there is clearly identifiable engineering information about new technology adoption that changes the factor input mix, data on the technology adoption may be included in the traditional factor demand model to econometrically model specific factor-biased technical change and to test its contribution. The adoption of thermomechanical pulping (TMP) and electric arc furnaces (EAF) are two electricity-intensive technology trends in the paper and steel industries, respectively. This paper presents the results of including these variables in a traditional econometric factor demand model based on the Generalized Leontief. The coefficients obtained for this 'engineering-based' technical change compare quite favorably with engineering estimates of the impact of TMP and EAF on electricity intensities, improve the estimates of the other price coefficients, and yield a more believable long-run electricity forecast. 6 refs., 1 fig

  16. Global sea-level rise is recognised, but flooding from anthropogenic land subsidence is ignored around northern Manila Bay, Philippines.

    Science.gov (United States)

    Rodolfo, Kelvin S; Siringan, Fernando P

    2006-03-01

    Land subsidence resulting from excessive extraction of groundwater is particularly acute in East Asian countries. Some Philippine government sectors have begun to recognise that the sea-level rise of one to three millimetres per year due to global warming is a cause of worsening floods around Manila Bay, but are oblivious to, or ignore, the principal reason: excessive groundwater extraction is lowering the land surface by several centimetres to more than a decimetre per year. Such ignorance allows the government to treat flooding as a lesser problem that can be mitigated through large infrastructural projects that are both ineffective and vulnerable to corruption. Money would be better spent on preventing the subsidence by reducing groundwater pumping and moderating population growth and land use, but these approaches are politically and psychologically unacceptable. Even if groundwater use is greatly reduced and enlightened land-use practices are initiated, natural deltaic subsidence and global sea-level rise will continue to aggravate flooding, although at substantially lower rates.

  17. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  18. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  19. Allocating health care: cost-utility analysis, informed democratic decision making, or the veil of ignorance?

    Science.gov (United States)

    Goold, S D

    1996-01-01

    Assuming that rationing health care is unavoidable, and that it requires moral reasoning, how should we allocate limited health care resources? This question is difficult because our pluralistic, liberal society has no consensus on a conception of distributive justice. In this article I focus on an alternative: Who shall decide how to ration health care, and how shall this be done to respect autonomy, pluralism, liberalism, and fairness? I explore three processes for making rationing decisions: cost-utility analysis, informed democratic decision making, and applications of the veil of ignorance. I evaluate these processes as examples of procedural justice, assuming that there is no outcome considered the most just. I use consent as a criterion to judge competing processes so that rationing decisions are, to some extent, self-imposed. I also examine the processes' feasibility in our current health care system. Cost-utility analysis does not meet criteria for actual or presumed consent, even if costs and health-related utility could be measured perfectly. Existing structures of government cannot creditably assimilate the information required for sound rationing decisions, and grassroots efforts are not representative. Applications of the veil of ignorance are more useful for identifying principles relevant to health care rationing than for making concrete rationing decisions. I outline a process of decision making, specifically for health care, that relies on substantive, selected representation, respects pluralism, liberalism, and deliberative democracy, and could be implemented at the community or organizational level.

  20. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valuedfunctions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are notonly of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to applythese methods in other fields. It is largely self- contained, but the readershould have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  1. Spin masters how the media ignored the real news and helped reelect Barack Obama

    CERN Document Server

    Freddoso, David

    2013-01-01

The biggest story of the election was how the media ignored the biggest story of the election. Amid all the breathless coverage of a non-existent War on Women, there was little or no coverage of Obama's war on the economy: how, for instance, part-time work is replacing full-time work; how low-wage jobs are replacing high-wage ones; how for Americans between the ages of 25 and 54 there are fewer jobs today than there were when the recession officially ended in 2009, and fewer, in fact, than at any time since mid-1997. The downsizing of the American economy wasn't the only stor

  2. Managing uncertainty, ambiguity and ignorance in impact assessment by embedding evolutionary resilience, participatory modelling and adaptive management.

    Science.gov (United States)

    Bond, Alan; Morrison-Saunders, Angus; Gunn, Jill A E; Pope, Jenny; Retief, Francois

    2015-03-15

    In the context of continuing uncertainty, ambiguity and ignorance in impact assessment (IA) prediction, the case is made that existing IA processes are based on false 'normal' assumptions that science can solve problems and transfer knowledge into policy. Instead, a 'post-normal science' approach is needed that acknowledges the limits of current levels of scientific understanding. We argue that this can be achieved through embedding evolutionary resilience into IA; using participatory workshops; and emphasising adaptive management. The goal is an IA process capable of informing policy choices in the face of uncertain influences acting on socio-ecological systems. We propose a specific set of process steps to operationalise this post-normal science approach which draws on work undertaken by the Resilience Alliance. This process differs significantly from current models of IA, as it has a far greater focus on avoidance of, or adaptation to (through incorporating adaptive management subsequent to decisions), unwanted future scenarios rather than a focus on the identification of the implications of a single preferred vision. Implementing such a process would represent a culture change in IA practice as a lack of knowledge is assumed and explicit, and forms the basis of future planning activity, rather than being ignored. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
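The sign-alternation (equioscillation) characterization referred to here can be observed numerically even in the plain polynomial case; the sketch below (a generic illustration, not from the report) interpolates exp at Chebyshev nodes and counts the sign changes of the error, which a best uniform approximation would exhibit in alternating fashion.

```python
import numpy as np

deg = 5
# Chebyshev nodes on [-1, 1]
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
coeffs = np.polynomial.chebyshev.chebfit(nodes, np.exp(nodes), deg)

x = np.linspace(-1.0, 1.0, 2001)
err = np.exp(x) - np.polynomial.chebyshev.chebval(x, coeffs)
# Chebyshev's theorem: the best uniform degree-5 approximation equioscillates
# with 7 alternating extrema. Interpolation at the Chebyshev nodes already
# forces the error to change sign at the 6 nodes, so it is nearly optimal.
sign_changes = int(np.count_nonzero(np.diff(np.sign(err)) != 0))
max_err = float(np.max(np.abs(err)))
```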

  4. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  5. In the Casino of Life: Betting on Risks and Ignoring the Consequences of Climate Change and Hazards

    Science.gov (United States)

    Brosnan, D. M.

    2016-12-01

Even when faced with strong scientific evidence, decision-makers cite uncertainty and delay action. Scientists, confident in the quality of their science and acknowledging that uncertainty, while present, is low by scientific standards, grow more frustrated as their information is ignored. Decreasing scientific uncertainty, a hallmark of long-term studies such as the IPCC reports, does little to motivate decision-makers. Imperviousness to scientific data is prevalent across all scales. Municipalities prefer to spend millions of dollars on engineered solutions to climate change and hazards, even when science shows that they perform less well than nature-based ones and cost much more. California is known to be at risk from tsunamis generated by earthquakes off Alaska. A study using a magnitude-9.1 earthquake, similar to a 1965 event, calculated the immediate economic price tag in infrastructure loss and business interruption at $9.5 billion. The exposure of Los Angeles/Long Beach port trade to damage and downtime exceeds $1.2 billion; business interruption would triple that figure. Yet despite several excellent scientific studies, the State is ill prepared; investments in infrastructure, commerce, and conservation risk being literally washed away. Globally there is a 5-10% probability of an extreme geohazard, e.g., a Tambora-like eruption, occurring in this century. With a "value of statistical life" of $2.2 million and a population of 7 billion, the risk for fatalities alone is $1.1-7 billion per year. But there is little interest in investing the $0.5-3.5 billion per year in volcano monitoring necessary to reduce fatalities and lower the risks of global conflict, starvation, and societal destruction. More science and less uncertainty is clearly not the driver of action. But is speaking with certainty really the answer? Decision-makers and scientists are in the same casino of life but rarely play at the same tables. Decision-makers bet differently from scientists. To motivate action we need to be cognizant of

  6. I Want to but I Won't: Pluralistic Ignorance Inhibits Intentions to Take Paternity Leave in Japan

    Directory of Open Access Journals (Sweden)

    Takeru Miyajima

    2017-09-01

The number of male employees who take paternity leave in Japan has been low in past decades. However, the majority of male employees actually wish to take paternity leave if they were to have a child. Previous studies have demonstrated that the organizational climate in workplaces is the major determinant of male employees' use of family-friendly policies, because males are often stigmatized and fear receiving negative evaluations from others. While such normative pressure might derive from prevailing social practices relevant to people's expectations of social roles (e.g., “Men make houses, women make homes”), these social practices are often perpetuated even after the majority of group members have ceased to support them. The perpetuation of this unpopular norm can be caused by the social psychological phenomenon of pluralistic ignorance. While researchers have explored people's beliefs about gender roles from various perspectives, a profound understanding of beliefs regarding gender role norms, and of the accuracy of beliefs about others, remains to be attained. The current research examined the association between pluralistic ignorance and the perpetually low rates of taking paternity leave in Japan. Specifically, Study 1 (n = 299) asked Japanese male employees (ages ranging from the 20s to the 40s) about their attitudes toward paternity leave and their estimates of the attitudes of other men of the same age, as well as their behavioral intentions (i.e., desire and willingness) to take paternity leave if they had a child in the future. The results demonstrated that male employees overestimated other men's negative attitudes toward paternity leave. Moreover, those who had positive attitudes toward taking leave and attributed negative attitudes to others were less willing to take paternity leave than were those who had positive attitudes and believed others shared those attitudes, although there was no significant difference between their desires to take paternity

  7. I Want to but I Won't: Pluralistic Ignorance Inhibits Intentions to Take Paternity Leave in Japan.

    Science.gov (United States)

    Miyajima, Takeru; Yamaguchi, Hiroyuki

    2017-01-01

The number of male employees who take paternity leave in Japan has been low in past decades. However, the majority of male employees actually wish to take paternity leave if they were to have a child. Previous studies have demonstrated that the organizational climate in workplaces is the major determinant of male employees' use of family-friendly policies, because males are often stigmatized and fear receiving negative evaluations from others. While such normative pressure might derive from prevailing social practices relevant to people's expectations of social roles (e.g., "Men make houses, women make homes"), these social practices are often perpetuated even after the majority of group members have ceased to support them. The perpetuation of this unpopular norm can be caused by the social psychological phenomenon of pluralistic ignorance. While researchers have explored people's beliefs about gender roles from various perspectives, a profound understanding of beliefs regarding gender role norms, and of the accuracy of beliefs about others, remains to be attained. The current research examined the association between pluralistic ignorance and the perpetually low rates of taking paternity leave in Japan. Specifically, Study 1 (n = 299) asked Japanese male employees (ages ranging from the 20s to the 40s) about their attitudes toward paternity leave and their estimates of the attitudes of other men of the same age, as well as their behavioral intentions (i.e., desire and willingness) to take paternity leave if they had a child in the future. The results demonstrated that male employees overestimated other men's negative attitudes toward paternity leave. Moreover, those who had positive attitudes toward taking leave and attributed negative attitudes to others were less willing to take paternity leave than were those who had positive attitudes and believed others shared those attitudes, although there was no significant difference between their desires to take paternity leave. Study 2 (n

  8. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
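The SPSA gradient estimate described above can be sketched in a few lines. This is a minimal, generic SPSA sketch (plain unconstrained SPSA, not the optimal-distribution result derived in the paper); the quadratic test loss, gain constants, and seeds are invented for illustration:

```python
import random

def spsa_minimize(loss, theta, a=0.1, c=0.1, iters=2000, seed=0):
    """Minimal SPSA sketch: each iteration uses only two loss measurements,
    perturbing all coordinates simultaneously with a random +/-1 vector."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602               # standard gain-decay schedules
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        y_plus = loss([t + ck * d for t, d in zip(theta, delta)])
        y_minus = loss([t - ck * d for t, d in zip(theta, delta)])
        ghat = (y_plus - y_minus) / (2 * ck)
        # ghat / delta_i is the simultaneous-perturbation estimate of dL/dtheta_i
        theta = [t - ak * ghat / d for t, d in zip(theta, delta)]
    return theta

random.seed(1)  # noise stream for the toy loss below

def noisy_loss(th):
    # Quadratic with minimum at (1, -2), plus small measurement noise
    return (th[0] - 1) ** 2 + (th[1] + 2) ** 2 + random.gauss(0, 0.01)

est = spsa_minimize(noisy_loss, [5.0, 5.0])
print(est)  # close to [1, -2]
```

Note that the cost per iteration is two loss evaluations regardless of the dimension of theta, which is the key appeal of SPSA over finite-difference stochastic approximation.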

  9. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
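The paper's AKCL algorithm itself is not reproduced here, but the core idea it builds on, avoiding the full n-by-n kernel matrix by working with a sampled subset of points, can be illustrated with a generic Nyström-style low-rank approximation (the RBF kernel, data, and landmark count are arbitrary choices for the sketch):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.2):
    # Pairwise RBF kernel exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))

# Full kernel matrix (what plain KCL would need): O(n^2) memory
K = rbf_kernel(X, X)

# Nystrom approximation from m sampled landmark points:
# K ~= C W^+ C^T, with C = K(X, landmarks), W = K(landmarks, landmarks)
m = 100
idx = rng.choice(len(X), size=m, replace=False)
C = rbf_kernel(X, X[idx])
W = rbf_kernel(X[idx], X[idx])
K_approx = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(rel_err)  # small relative Frobenius error at a fraction of the memory
```

The landmark-based factorization needs only the n-by-m block C rather than the full n-by-n matrix, which is the kind of memory reduction the abstract describes.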

  10. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  11. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  12. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  13. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  14. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence result for optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification, and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions
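The Wang construction mentioned above (Gaussian membership, product inference, centroid defuzzification) is small enough to sketch directly. The target function, rule centers, and width below are invented for illustration; more rules give a tighter approximation:

```python
import math

def fuzzy_system(centers, sigma, target):
    """Wang-style fuzzy system: one rule per center, Gaussian membership
    functions, product inference, centroid (weighted-average) defuzzification."""
    y = [target(c) for c in centers]  # rule consequents: target value at each center
    def g(x):
        w = [math.exp(-((x - c) / sigma) ** 2) for c in centers]
        return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return g

centers = [i * math.pi / 20 for i in range(21)]   # 21 rules covering [0, pi]
g = fuzzy_system(centers, sigma=math.pi / 20, target=math.sin)

grid = [k * math.pi / 200 for k in range(201)]
max_err = max(abs(g(x) - math.sin(x)) for x in grid)
print(max_err)  # small uniform error; shrinks as rules are added
```

This is exactly the uniform-approximation setup the abstract refers to: the output is a normalized weighted average of rule consequents, so it interpolates smoothly between the centers.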

  15. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  16. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation

  17. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
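The claim that a Padé approximant often beats a truncated Taylor series built from the same derivative data can be checked with a standard textbook example (not from the paper): the [2/2] Padé approximant of exp(x) versus the 4th-order Taylor polynomial, both of which match exp through its 4th derivative at 0.

```python
import math

def pade22_exp(x):
    # [2/2] Pade approximant of exp(x): (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
    num = 1 + x / 2 + x**2 / 12
    den = 1 - x / 2 + x**2 / 12
    return num / den

def taylor4_exp(x):
    # 4th-order Taylor polynomial of exp(x) about 0
    return sum(x**k / math.factorial(k) for k in range(5))

x = 1.0
exact = math.exp(x)
print(abs(pade22_exp(x) - exact))   # ~4.0e-3
print(abs(taylor4_exp(x) - exact))  # ~9.9e-3, more than twice the Pade error
```

Both approximations use the same five Taylor coefficients; the rational form simply redistributes them, which is why the Padé version can also remain finite where the Taylor series diverges.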

  18. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  19. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  20. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  1. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright

  2. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanovia, represent the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  3. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions...... of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point, The procedure is illustrated by a numerical example, (C) 1997 Elsevier Science Ltd....

  4. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

the basic concepts on which the papers build. Among other things, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered......This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq and a summary of each of the three papers. The introduction introduces...

  5. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.

  6. Tidal Evolution of Asteroidal Binaries. Ruled by Viscosity. Ignorant of Rigidity

    OpenAIRE

    Efroimsky, Michael

    2015-01-01

    The rate of tidal evolution of asteroidal binaries is defined by the dynamical Love numbers divided by quality factors. Common is the (often illegitimate) approximation of the dynamical Love numbers with their static counterparts. As the static Love numbers are, approximately, proportional to the inverse rigidity, this renders a popular fallacy that the tidal evolution rate is determined by the product of the rigidity by the quality factor: $\\,k_l/Q\\propto 1/(\\mu Q)\\,$. In reality, the dynami...

  7. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  8. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
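A toy software analogue of the voter arrangement described above (the three "approximate circuits" here are hypothetical Python functions, not hardware): as long as no two replicas err on the same input, the bitwise 2-of-3 majority reproduces the reference circuit's output on every input.

```python
def majority_vote(a, b, c):
    """Bitwise 2-of-3 majority of three integer output words."""
    return (a & b) | (a & c) | (b & c)

# Three hypothetical approximate circuits for the reference f(x) = (x + 1) mod 16;
# each one is wrong on a different (disjoint) input.
def approx1(x): return (x + 1) & 0xF if x != 3 else 0
def approx2(x): return (x + 1) & 0xF if x != 9 else 0
def approx3(x): return (x + 1) & 0xF if x != 12 else 0

# Because no two approximations err on the same input, the voter output
# matches the reference circuit for every possible input value.
assert all(majority_vote(approx1(x), approx2(x), approx3(x)) == (x + 1) & 0xF
           for x in range(16))
```

The bitwise form `(a & b) | (a & c) | (b & c)` computes the majority independently for each output bit, mirroring how a hardware voter would operate on each signal line.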

  10. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  11. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Padé approximation, is suggested. It is shown that the specific behavior of the Padé approximation in polar zones is an extremely favourable analytical property, essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Padé approximation is a particularly natural instrument for resonance curve processing, as the resonances correspond to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy

  12. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  13. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

Hartree-Fock and Tamm-Dancoff approximations are tested for the angular momentum of even-even nuclei. Wave functions, energy levels, and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to Ne-20

  14. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  15. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio-processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend that approximation skills and approximate number processing be subjects of future investigations on decision making under risk.

  16. Should we ignore U-235 series contribution to dose?

    International Nuclear Information System (INIS)

    Beaugelin-Seiller, Karine; Goulet, Richard; Mihok, Steve; Beresford, Nicholas A.

    2016-01-01

Environmental Risk Assessment (ERA) methodology for radioactive substances is an important regulatory tool for assessing the safety of licensed nuclear facilities for wildlife, and the environment as a whole. ERAs are therefore expected to be both fit for purpose and conservative. When uranium isotopes are assessed, there are many radioactive decay products which could be considered. However, risk assessors usually assume U-235 and its daughters contribute negligibly to radiological dose. The validity of this assumption has not been tested: what might the U-235 family contribution be, and how does the estimate depend on the assumptions applied? In this paper we address this question by considering aquatic wildlife in Canadian lakes exposed to historic uranium mining practices. A full theoretical approach was used, in parallel to a more realistic assessment based on measurements of several elements of the U decay chains. The U-235 family contribution varied between about 4% and 75% of the total dose rate depending on the assumptions of the equilibrium state of the decay chains. Hence, ignoring the U-235 series will not result in conservative dose assessments for wildlife. These arguments provide a strong case for more in situ measurements of the important members of the U-235 chain and for its consideration in dose assessments. - Highlights: • Realistic ecological risk assessment infers a complete inventory of radionuclides. • The U-235 family may not be minor when assessing total dose rates experienced by biota. • There is a need to investigate the real state of equilibrium decay of U chains. • There is a need to improve the capacity to measure all elements of the U decay chains.

  17. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...

  18. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
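The truncated-polynomial construction mentioned above can be sketched as an ordinary least-squares fit: a cubic spline is represented in the basis 1, x, x^2, x^3 plus one truncated cubic (x - k)^3 (active only for x > k) per knot. The data, knot positions, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 80)
y = np.sin(1.5 * x) + rng.normal(0, 0.05, x.size)  # noisy samples of a smooth curve

# Cubic spline via truncated polynomials: basis 1, x, x^2, x^3, (x - k)^3_+
knots = [1.0, 2.0, 3.0]
cols = [x**p for p in range(4)]
cols += [np.where(x > k, (x - k) ** 3, 0.0) for k in knots]
A = np.column_stack(cols)

# Ordinary least-squares estimate of the spline coefficients
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fit = A @ coef

rmse = np.sqrt(np.mean((fit - y) ** 2))
print(rmse)  # roughly the noise level: the spline follows the signal, not the noise
```

Each truncated term adds a jump only in the third derivative at its knot, so the fitted curve is automatically C2-continuous, which is the property that makes this basis convenient despite its known numerical-stability drawbacks for many knots.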

  19. IGNORING CHILDREN'S BEDTIME CRYING: THE POWER OF WESTERN-ORIENTED BELIEFS.

    Science.gov (United States)

    Maute, Monique; Perren, Sonja

    2018-03-01

    Ignoring children's bedtime crying (ICBC) is an issue that polarizes parents as well as pediatricians. While most studies have focused on the effectiveness of sleep interventions, no study has yet questioned which parents use ICBC. Parents often find children's sleep difficulties to be very challenging, but factors such as the influence of Western approaches to infant care, stress, and sensitivity have not been analyzed in terms of ICBC. A sample of 586 parents completed a questionnaire to investigate the relationships between parental factors and the method of ICBC. Data were analyzed using structural equation modeling. Latent variables were used to measure parental stress (Parental Stress Scale; J.O. Berry & W.H. Jones, 1995), sensitivity (Situation-Reaction-Questionnaire; Y. Hänggi, K. Schweinberger, N. Gugger, & M. Perrez, 2010), Western-oriented parental beliefs (Rigidity), and children's temperament (Parenting Stress Index; H. Tröster & R.R. Abidin). ICBC was used by 32.6% (n = 191) of parents in this study. Parents' Western-oriented beliefs predicted ICBC. Attitudes such as feeding a child on a time schedule and not carrying it out to prevent dependence were associated with letting the child cry to fall asleep. Low-sensitivity parents as well as parents of children with a difficult temperament used ICBC more frequently. Path analysis shows that parental stress did not predict ICBC. The results suggest that ICBC has become part of Western childrearing tradition. © 2018 Michigan Association for Infant Mental Health.

  20. Approximate Simulation of Acute Hypobaric Hypoxia with Normobaric Hypoxia

    Science.gov (United States)

    Conkin, J.; Wessel, J. H., III

    2011-01-01

    INTRODUCTION. Some manufacturers of reduced oxygen (O2) breathing devices claim a comparable hypobaric hypoxia (HH) training experience by providing an inspired O2 fraction (F(sub I)O2) that reproduces the ambient O2 partial pressure (pO2) of the target altitude. METHODS. Literature from investigators and manufacturers indicates that these devices may not properly account for the 47 mmHg of water vapor partial pressure that reduces the inspired partial pressure of O2 (P(sub I)O2). Nor do they account for the complex reality of alveolar gas composition as defined by the Alveolar Gas Equation. In essence, by providing the same pO2 (iso-pO2) conditions for normobaric hypoxia (NH) as for HH exposures, the devices ignore P(sub A)O2 and P(sub A)CO2 as more direct agents to induce signs and symptoms of hypoxia during acute training exposures. RESULTS. There is not a sufficient integrated physiological understanding of the determinants of P(sub A)O2 and P(sub A)CO2 under acute NH and HH given the same hypoxic pO2 to claim a device that provides isohypoxia. Isohypoxia is defined as the same distribution of hypoxia signs and symptoms under any circumstances of equivalent hypoxic dose, and hypoxic pO2 is an incomplete hypoxic dose. Some devices that claim an equivalent HH experience under NH conditions significantly overestimate the HH condition, especially when simulating altitudes above 10,000 feet (3,048 m). CONCLUSIONS. At best, the claim should be that the devices provide an approximate HH experience, since they only duplicate at sea level the ambient pO2 of the target altitude (iso-pO2 machines). An approach to reduce the overestimation is to at least provide machines that create at sea level the same P(sub I)O2 conditions (iso-P(sub I)O2 machines) as at the target altitude, a simple software upgrade.
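    The inspired-O2 bookkeeping described above can be made concrete with a small sketch. The 47 mmHg water vapor pressure and 760 mmHg sea-level pressure come from the abstract; the standard-atmosphere barometric fit, the 10,000 ft example and all function names are assumptions for illustration only.

```python
# Hedged sketch of iso-pO2 vs iso-P_IO2 device settings (illustrative, not
# any manufacturer's algorithm).
PH2O = 47.0      # mmHg, water vapor pressure at body temperature
PB_SL = 760.0    # mmHg, sea-level barometric pressure
FIO2_AIR = 0.2095

def pb_at_altitude_ft(alt_ft):
    """Approximate barometric pressure (mmHg) from a standard-atmosphere fit."""
    alt_m = alt_ft * 0.3048
    return PB_SL * (1.0 - 2.25577e-5 * alt_m) ** 5.25588

def inspired_po2(fio2, pb):
    """P_IO2 = F_IO2 * (Pb - 47): O2 pressure after airway humidification."""
    return fio2 * (pb - PH2O)

pb_alt = pb_at_altitude_ft(10000.0)          # about 523 mmHg
pio2_alt = inspired_po2(FIO2_AIR, pb_alt)    # true altitude P_IO2

# iso-pO2 device: matches only the ambient pO2 of the target altitude
fio2_iso_po2 = FIO2_AIR * pb_alt / PB_SL
pio2_iso_po2 = inspired_po2(fio2_iso_po2, PB_SL)

# iso-P_IO2 device (the suggested software upgrade): matches P_IO2 itself
fio2_iso_pio2 = pio2_alt / (PB_SL - PH2O)

print(pio2_alt, pio2_iso_po2, fio2_iso_pio2)
# the iso-pO2 setting yields a different P_IO2 than true altitude, because the
# fixed 47 mmHg of water vapor takes a larger relative bite at reduced Pb
```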

  1. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
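    The quoted few-percent accuracy is easy to check against the closed-form Flory estimate nu = 3/(d + 2) for the self-avoiding-walk size exponent. This is a sketch: the d = 3 reference value is the standard numerical estimate from the literature, not a number taken from this abstract.

```python
# Flory estimate of the self-avoiding-walk exponent vs accepted values.
def flory_nu(d):
    return 3.0 / (d + 2.0)

reference = {1: 1.0, 2: 0.75, 3: 0.5877}  # exact for d = 1, 2; numerical for d = 3
for d, nu_ref in reference.items():
    rel = abs(flory_nu(d) - nu_ref) / nu_ref
    print(d, flory_nu(d), rel)  # relative error is 0 at d = 1, 2 and ~2% at d = 3
```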

  2. Behavioural responses to human-induced change: Why fishing should not be ignored.

    Science.gov (United States)

    Diaz Pauli, Beatriz; Sih, Andrew

    2017-03-01

    Change in behaviour is usually the first response to human-induced environmental change and key for determining whether a species adapts to environmental change or becomes maladapted. Thus, understanding the behavioural response to human-induced changes is crucial in the interplay between ecology, evolution, conservation and management. Yet the behavioural response to fishing activities has been largely ignored. We review studies contrasting how fish behaviour affects catch by passive (e.g., long lines, angling) versus active gears (e.g., trawls, seines). We show that fishing not only targets certain behaviours, but it leads to a multitrait response including behavioural, physiological and life-history traits with population, community and ecosystem consequences. Fisheries-driven change (plastic or evolutionary) of fish behaviour and its correlated traits could impact fish populations well beyond their survival per se, affecting predation risk, foraging behaviour, dispersal, parental care, etc., and hence numerous ecological issues including population dynamics and trophic cascades. In particular, we discuss implications of behavioural responses to fishing for fisheries management and population resilience. More research on these topics, however, is needed to draw general conclusions, and we suggest fruitful directions for future studies.

  3. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  4. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors

  5. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  6. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

  7. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
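    The computational virtue of circulant matrices is that the DFT diagonalizes them, so regularized linear systems can be solved in O(n log n). The following is a hedged single-level sketch of this idea, not the paper's multilevel algorithm or its error bounds: a stationary kernel matrix on a uniform grid is replaced by its Frobenius-nearest circulant (wrapped-diagonal averaging), and a ridge system is solved with the FFT. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.linspace(0.0, 1.0, n)                         # evenly spaced inputs
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)   # RBF kernel matrix (Toeplitz here)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(n)
lam = 0.1                                            # ridge regularization

# Frobenius-nearest circulant approximation: average the wrapped diagonals of K.
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
c = np.bincount(idx.ravel(), weights=K.ravel(), minlength=n) / n

# Solve (C + lam I) alpha = y in O(n log n): circulants are diagonalized by the DFT.
eig = np.fft.fft(c)
alpha_circ = np.real(np.fft.ifft(np.fft.fft(y) / (eig + lam)))

alpha_exact = np.linalg.solve(K + lam * np.eye(n), y)
rel_err = np.linalg.norm(alpha_circ - alpha_exact) / np.linalg.norm(alpha_exact)
print(rel_err)  # approximation error, mostly from the periodic wrap-around
```

    The FFT solve is exact for the circulant system itself; the approximation error comes entirely from replacing K by its circulant surrogate, which is what the paper's bounds quantify.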

  8. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  9. Reassessing insurers' access to genetic information: genetic privacy, ignorance, and injustice.

    Science.gov (United States)

    Feiring, Eli

    2009-06-01

    Many countries have imposed strict regulations on the genetic information to which insurers have access. Commentators have warned against the emerging body of legislation for different reasons. This paper demonstrates that, when confronted with the argument that genetic information should be available to insurers for health insurance underwriting purposes, one should avoid appeals to rights of genetic privacy and genetic ignorance. The principle of equality of opportunity may nevertheless warrant restrictions. A choice-based account of this principle implies that it is unfair to hold people responsible for the consequences of the genetic lottery, since we have no choice in selecting our genotype or the expression of it. However appealing, this view does not take us all the way to an adequate justification of inaccessibility of genetic information. A contractarian account, suggesting that health is a condition of opportunity and that healthcare is an essential good, seems more promising. I conclude that if or when predictive medical tests (such as genetic tests) are developed with significant actuarial value, individuals have less reason to accept as fair institutions that limit access to healthcare on the grounds of risk status. Given the assumption that a division of risk pools in accordance with a rough estimate of people's level of (genetic) risk will occur, fairness and justice favour universal health insurance based on solidarity.

  10. Beyond duplicity and ignorance in global fisheries

    Directory of Open Access Journals (Sweden)

    Daniel Pauly

    2009-06-01

    Full Text Available The three decades following World War II were a period of rapidly increasing fishing effort and landings, but also of spectacular collapses, particularly in small pelagic fish stocks. This is also the period in which a toxic triad of catch underreporting, ignoring scientific advice and blaming the environment emerged as standard response to ongoing fisheries collapses, which became increasingly more frequent, finally engulfing major North Atlantic fisheries. The response to the depletion of traditional fishing grounds was an expansion of North Atlantic (and generally of northern hemisphere fisheries in three dimensions: southward, into deeper waters and into new taxa, i.e. catching and marketing species of fish and invertebrates previously spurned, and usually lower in the food web. This expansion provided many opportunities for mischief, as illustrated by the European Union’s negotiated ‘agreements’ for access to the fish resources of Northwest Africa, China’s agreement-fee exploitation of the same, and Japan blaming the resulting resource declines on the whales. Also, this expansion provided new opportunities for mislabelling seafood unfamiliar to North Americans and Europeans, and misleading consumers, thus reducing the impact of seafood guides and similar effort toward sustainability. With fisheries catches declining, aquaculture—despite all public relation efforts—not being able to pick up the slack, and rapidly increasing fuel prices, structural changes are to be expected in both the fishing industry and the scientific disciplines that study it and influence its governance. Notably, fisheries biology, now predominantly concerned with the welfare of the fishing industry, will have to be converted into fisheries conservation science, whose goal will be to resolve the toxic triad alluded to above, and thus maintain the marine biodiversity and ecosystems that provide existential services to fisheries. Similarly, fisheries

  11. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  12. Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer…

  13. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation

  14. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Let (U';C') be a subspace of a covering approximation space (U;C) and X⊂U'. In this paper, we show that … and B'(X)⊂B(X)∩U'. Also, … iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U';C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful to obtain further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

  15. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  16. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  17. Comparison of the Born series and rational approximants in potential scattering. [Padé approximants, Yukawa and exponential potential]

    Energy Technology Data Exchange (ETDEWEB)

    Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria

    1976-05-08

    The real utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. It is suggested to make use of Padé approximants, and it is shown that this approach provides very fast convergent sequences over the whole energy range studied. Yukawa and exponential potentials are explicitly considered and the results are compared with high-order Born approximations.
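    The acceleration effect that Padé resummation provides over a raw series can be illustrated on a toy series rather than the scattering amplitudes of the paper: the Taylor series of ln(1+x) at x = 1. The [2/2] Padé approximant is built from the same five Taylor coefficients as the fourth-order partial sum, yet is two orders of magnitude more accurate.

```python
import numpy as np

# Taylor coefficients of ln(1+x): c_0 = 0, c_k = (-1)^(k+1)/k for k >= 1
c = np.array([0.0] + [(-1) ** (k + 1) / k for k in range(1, 5)])  # through x^4

# [2/2] Padé approximant N(x)/D(x), D(x) = 1 + b1 x + b2 x^2: require the
# series of D * ln(1+x) - N to vanish through x^4. The x^3 and x^4 conditions
# give a 2x2 linear system for the denominator coefficients.
A = np.array([[c[2], c[1]],
              [c[3], c[2]]])
b1, b2 = np.linalg.solve(A, -np.array([c[3], c[4]]))
a0 = c[0]
a1 = c[1] + b1 * c[0]
a2 = c[2] + b1 * c[1] + b2 * c[0]

x = 1.0  # evaluate at x = 1, where the target is ln 2 ~ 0.6931
pade_val = (a0 + a1 * x + a2 * x ** 2) / (1.0 + b1 * x + b2 * x ** 2)
partial_sum = c @ x ** np.arange(5)  # plain fourth-order Taylor sum

print(pade_val, partial_sum)  # Padé ~ 0.6923 vs partial sum ~ 0.5833
```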

  18. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    In this paper we introduce a Jackson-type theorem for functions in Lp spaces on the sphere and study the best approximation of functions in Lp spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness.

  19. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions, with the goal of extending the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes the impact parameter by a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility to analyze data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  20. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.

  1. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.

  2. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  3. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...

  4. Physical explanation of the SLIPI technique by the large scatterer approximation of the RTE

    International Nuclear Information System (INIS)

    Kristensson, Elias; Kristensson, Gerhard

    2017-01-01

    Visualizing the interior of a turbid scattering medium by means of light-based methods is not a straightforward task because of multiple light scattering, which generates image blur. To overcome this issue, a technique called Structured Laser Illumination Planar Imaging (SLIPI) was developed within the field of spray imaging. The method is based on a 'light coding' strategy to distinguish between directly and multiply scattered light, allowing the intensity from the latter to be suppressed by means of data post-processing. Recently, the performance of the SLIPI technique was investigated, during which deviations from theoretical predictions were discovered. In this paper, we aim to explain the origin of these deviations, and to achieve this end, we have performed several SLIPI measurements under well-controlled conditions. Our experimental results are compared with a theoretical model that is based on the large scatterer approximation of the Radiative Transfer Equation but modified according to certain constraints. Specifically, our model is designed to (1) ignore all off-axis intensity contributions, (2) treat unperturbed and forward-scattered light equally and (3) accept light scattered within a narrow forward cone, as we believe these are the rules governing the SLIPI technique. The comparison conclusively shows that optical measurements based on scattering and/or attenuation in turbid media can be subject to significant errors if not all aspects of light-matter interactions are considered. Our results indicate, as was expected, that forward-scattering can lead to deviations between experiments and theoretical predictions, especially when probing relatively large particles. Yet, the model also suggests that the spatial frequency of the superimposed 'light code' as well as the spreading of the light-probe are important factors one also needs to consider. The observed deviations from theoretical predictions could, however, potentially be exploited to

  5. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated

  6. Low Rank Approximation: Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  7. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal, both for the purpose of compression as well as for an efficient analysis, is the provision of optimally sparse approximations. … Shearlet systems provide optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames, as well as a reference for the state of the art of this research field.

  8. Desiccator Volume: A Vital Yet Ignored Parameter in CaCO3 Crystallization by the Ammonium Carbonate Diffusion Method

    Directory of Open Access Journals (Sweden)

    Joe Harris

    2017-07-01

    Employing the widely used ammonium carbonate diffusion method, we demonstrate that altering an extrinsic parameter—desiccator size—which is rarely detailed in publications, can alter the route of crystallization. Hexagonally packed assemblies of spherical magnesium-calcium carbonate particles or spherulitic aragonitic particles can be selectively prepared from the same initial reaction solution by simply changing the internal volume of the desiccator, thereby changing the rate of carbonate addition and consequently precursor formation. This demonstrates that it is not merely the quantity of an additive which can control particle morphogenesis and phase selectivity, but control of other often ignored parameters are vital to ensure adequate reproducibility.

  9. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
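    A single-level version of weighted least squares polynomial approximation from random samples might look as follows. This is a sketch under simplifying assumptions: uniform sampling and uniform weights on [-1, 1] with a Legendre basis, not the optimal sampling distribution or the multilevel scheme of the paper, and the target function is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: np.cos(2.0 * np.pi * t)    # illustrative target function
n, degree = 200, 10                      # random samples; subspace dimension 11

x = rng.uniform(-1.0, 1.0, n)            # uniform sample locations (the paper's
w = np.ones(n)                           # optimal distribution and weights differ)
V = np.polynomial.legendre.legvander(x, degree)   # Legendre basis on [-1, 1]
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

xs = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(np.polynomial.legendre.legval(xs, coef) - f(xs)))
print(err)  # sup-norm error of the weighted least squares projection
```

    In the multilevel method, several such solves with different sample sizes and discretization accuracies are combined so that the total work matches a single accurate solve at reduced cost.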

  10. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

    Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which causes large errors in the conventional approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well, with errors below 10%.

  11. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \\(k\\)-nearest neighbors regression (\\(k\\)-NNR), and more generally, local polynomial kernel regression. Unlike \\(k\\)-NNR, however, SPARROW can adapt the number of regressors to use based...

  12. Honing in on the Social Difficulties Associated With Sluggish Cognitive Tempo in Children: Withdrawal, Peer Ignoring, and Low Engagement.

    Science.gov (United States)

    Becker, Stephen P; Garner, Annie A; Tamm, Leanne; Antonini, Tanya N; Epstein, Jeffery N

    2017-03-13

    Sluggish cognitive tempo (SCT) symptoms are associated with social difficulties in children, though findings are mixed and many studies have used global measures of social impairment. The present study tested the hypothesis that SCT would be uniquely associated with aspects of social functioning characterized by withdrawal and isolation, whereas attention deficit/hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) symptoms would be uniquely associated with aspects of social functioning characterized by inappropriate responding in social situations and active peer exclusion. Participants were 158 children (70% boys) between 7 and 12 years of age being evaluated for possible ADHD. Both parents and teachers completed measures of SCT, ADHD, ODD, and internalizing (anxiety/depression) symptoms. Parents also completed ratings of social engagement and self-control. Teachers also completed measures assessing asociality and exclusion, as well as peer ignoring and dislike. In regression analyses controlling for demographic characteristics and other psychopathology symptoms, parent-reported SCT symptoms were significantly associated with lower social engagement (e.g., starting conversations, joining activities). Teacher-reported SCT symptoms were significantly associated with greater asociality/withdrawal and ratings of more frequent ignoring by peers, as well as greater exclusion. ODD symptoms and ADHD hyperactive-impulsive symptoms were more consistently associated with other aspects of social behavior, including peer exclusion, being disliked by peers, and poorer self-control during social situations. Findings provide the clearest evidence to date that the social difficulties associated with SCT are primarily due to withdrawal, isolation, and low initiative in social situations. Social skills training interventions may be effective for children displaying elevated SCT symptomatology.

  13. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.

  14. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  15. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.

  16. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  17. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented

  18. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A nice estimate relating entropy and approximation numbers for noncompact maps is given.

  19. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  20. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
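
    As a toy illustration of the sketching idea, a CountSketch can maintain approximate node degrees over an edge stream in memory sublinear in the number of edges. The simple modular hashing below is an illustrative choice for the example, not the construction used in the paper.

```python
import numpy as np

# Toy CountSketch for approximate degree centrality over an edge stream.
class CountSketch:
    def __init__(self, depth, width, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        self.p = 2**31 - 1                       # Mersenne prime for hashing
        self.a = rng.integers(1, self.p, size=depth)
        self.b = rng.integers(0, self.p, size=depth)

    def _bucket(self, x):                        # one bucket per row
        return (self.a * x + self.b) % self.p % self.width

    def _sign(self, x):                          # +/-1 sign per row
        return 1 - 2 * ((self.a * (x + 1) + self.b) % self.p % 2)

    def update(self, x, count=1):
        self.table[np.arange(self.depth), self._bucket(x)] += self._sign(x) * count

    def estimate(self, x):                       # median across rows is robust
        return float(np.median(self.table[np.arange(self.depth), self._bucket(x)] * self._sign(x)))

# Stream edges: each edge increments the degree of both endpoints.
edges = [(0, i) for i in range(1, 50)] + [(1, 2), (3, 4)]
cs = CountSketch(depth=5, width=256)
for u, v in edges:
    cs.update(u)
    cs.update(v)
```

    Here node 0 has true degree 49, and its sketched estimate stays close while the table uses fixed memory independent of the number of distinct edges.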

  1. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that different IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterized by axioms.

  2. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2016-08-26

    In this paper, we propose a definition of approximation property, called the metric invariant translation approximation property, for a countable discrete metric space.

  3. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.

    2008-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  4. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.

    2006-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  5. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz's result to the case in which the approximating function is αe^(−ux) ∑_{k=N}^{∞} ((ux)^(kα+β−1)/Γ(kα+β)) f(kα/u). The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
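
    The Poisson-based approximation mentioned above can be checked numerically. The sketch below evaluates the classical Szasz-Mirakyan operator with a finite truncation; the truncation length K is an implementation choice, not a detail from the note.

```python
import math

# Numerical sketch of the Szasz-Mirakyan operator
#   S_u f(x) = e^(-ux) * sum_{k>=0} (ux)^k / k! * f(k/u),
# truncated at K terms (assumes x > 0 so log(u*x) is defined).
def szasz(f, x, u, K=500):
    total = 0.0
    log_term = -u * x                                   # log of the k = 0 Poisson weight
    for k in range(K):
        total += math.exp(log_term) * f(k / u)
        log_term += math.log(u * x) - math.log(k + 1)   # advance weight to k + 1
    return total

# Known moment identity: S_u(t^2)(x) = x^2 + x/u, so this should be ~1.005
approx = szasz(lambda t: t * t, x=1.0, u=200.0)
```

    As u grows, S_u f(x) converges to f(x) for well-behaved f, which is the sense in which these operators are "analogous approximations" to the Bernstein construction.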

  6. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  7. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  8. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...

  9. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are

  10. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational efforts expended can only be expected if the computational effort is distributed and done in parallel.
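
    A "simple, very easily implementable" greedy algorithm of this set-cover flavor can be sketched as follows: repeatedly pick the SNP (column) that distinguishes the most still-unresolved haplotype pairs. Binary strings stand in for haplotypes here; this is an illustration of the technique, not the paper's exact code.

```python
from itertools import combinations

# Greedy haplotype tagging as set cover over haplotype pairs.
def greedy_tag(haplotypes):
    n, m = len(haplotypes), len(haplotypes[0])
    # Pairs of distinct haplotypes that still need a distinguishing SNP
    uncovered = {(i, j) for i, j in combinations(range(n), 2)
                 if haplotypes[i] != haplotypes[j]}
    chosen = []
    while uncovered:
        # Column separating the largest number of unresolved pairs
        best = max(range(m), key=lambda c: sum(
            haplotypes[i][c] != haplotypes[j][c] for i, j in uncovered))
        chosen.append(best)
        uncovered = {(i, j) for i, j in uncovered
                     if haplotypes[i][best] == haplotypes[j][best]}
    return chosen

haps = ["0011", "0101", "1001", "1110"]
tags = greedy_tag(haps)   # two SNPs suffice to distinguish all four haplotypes
```

    The greedy choice is what yields the logarithmic approximation guarantee quoted in the abstract.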

  11. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The purpose of this work is to determine an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method, some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; these approximations, for the case of the narrow resonances, were then substituted into the adjoint neutron balance equation for the fuel, yielding an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)

  12. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  13. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.

  14. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
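
    The gap between the naive form ln n! ≈ n ln n − n and the fuller form including the √(2πn) correction is easy to see numerically:

```python
import math

# Two forms of Stirling's approximation to ln n!:
# the crude n*ln(n) - n, and the refined form with the sqrt(2*pi*n) term.
def stirling_crude(n):
    return n * math.log(n) - n

def stirling_full(n):
    return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

n = 60
exact = math.lgamma(n + 1)        # ln(n!) computed without overflow
crude_err = abs(stirling_crude(n) - exact)
full_err = abs(stirling_full(n) - exact)
```

    Even at n = 60 the crude form is off by about 3 natural-log units (a multiplicative factor of roughly 20 in n! itself), which is exactly the kind of pitfall a naive application can run into.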

  15. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.
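
    The claim that the Kalman update is a special case of Bayesian updating can be illustrated in the scalar linear-Gaussian setting, where it coincides with the conjugate precision-weighted posterior (a standard textbook identity, not code from the report):

```python
# Scalar Kalman update: Gaussian prior N(m, P) on x, observation
# y = x + noise with noise ~ N(0, R).
def kalman_update(m, P, y, R):
    K = P / (P + R)                  # Kalman gain
    return m + K * (y - m), (1.0 - K) * P

m, P, y, R = 0.0, 4.0, 2.0, 1.0
post_mean, post_var = kalman_update(m, P, y, R)

# Conjugate Bayesian posterior, precision-weighted, for comparison
bayes_var = 1.0 / (1.0 / P + 1.0 / R)
bayes_mean = bayes_var * (m / P + y / R)
```

    Both routes give the same posterior N(1.6, 0.8), which is the sense in which the Kalman formula is a particular case of the Bayesian update.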

  16. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise such as Fano and quantization also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
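
    The difference between the Poisson model and its SD-AWGN stand-in shows up in higher moments at low photon counts: both match mean and variance, but only the Poisson model is skewed. A small simulation (with purely illustrative parameters):

```python
import numpy as np

# Poisson photon noise vs. a signal-dependent Gaussian (SD-AWGN) stand-in.
rng = np.random.default_rng(0)
signal = 5.0                                   # mean photon count (low light)
n = 200_000
poisson = rng.poisson(signal, size=n).astype(float)
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal), size=n)

def skewness(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5
```

    The Poisson sample has skewness near 1/√5 ≈ 0.45 while the Gaussian stand-in is symmetric, even though their means and variances agree; at higher signal levels the two models become hard to distinguish.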

  17. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  18. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
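
    For context, the exact GI/M/1 characteristics that such approximations target follow from the root σ ∈ (0, 1) of σ = A*(μ(1 − σ)), where A* is the Laplace-Stieltjes transform of the interarrival distribution and μ the service rate. A sketch with Erlang-2 arrivals (an assumed example, not one of the paper's cases):

```python
# Fixed-point iteration for the GI/M/1 root sigma = A*(mu * (1 - sigma)).
def gi_m1_sigma(lst, mu, tol=1e-12, max_iter=10_000):
    sigma = 0.5
    for _ in range(max_iter):
        new = lst(mu * (1.0 - sigma))
        if abs(new - sigma) < tol:
            return new
        sigma = new
    return sigma

mu, lam = 1.0, 0.8                                           # service rate, arrival rate
erlang2_lst = lambda s: (2.0 * lam / (2.0 * lam + s)) ** 2   # Erlang-2 LST, mean 1/lam
sigma = gi_m1_sigma(erlang2_lst, mu)                         # P(wait > 0)
mean_wait = sigma / (mu * (1.0 - sigma))                     # mean queueing delay E[Wq]
```

    A moment-matching approximation of the kind the paper studies would replace `erlang2_lst` by the transform of a fitted two-exponential mixture and compare the resulting σ and E[Wq] against these exact values.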

  19. Evaluation of the dynamic responses of high rise buildings with respect to the direct methods for soil-foundation-structure interaction effects and comparison with the approximate methods

    Directory of Open Access Journals (Sweden)

    Jahangir Khazaei

    2017-08-01

    Full Text Available In dynamic analysis, modeling of the soil medium is often ignored because of the unbounded extent and complexity of soil behavior, so the important effects of these terms are neglected, even though the behavior of the soil under the structure plays an important role in the response of the structure during an earthquake. In fact, the soil layers and the soil-foundation-structure interaction phenomena can increase the seismic forces applied during earthquakes, which has been examined with different methods. In this paper, the effects of soil-foundation-structure interaction on a steel high-rise building have been modeled using Abaqus software for nonlinear dynamic analysis with the finite element direct method and simulation of an infinite boundary condition for the soil medium, and also with the approximate Cone model. In the direct method, soil, structure, and foundation are modeled together. On the other hand, in the Cone model, as a simple model, dynamic stiffness coefficients have been employed to simulate the soil by considering springs and dashpots in all degrees of freedom. The results show that considering soil-foundation-structure interaction increases the maximum lateral displacement of the structure, and that the friction coefficient of the soil-foundation interface can alter the responses of the structure. It was also observed that the results of the approximate methods show good agreement for the engineering demands considered.
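
    The Cone-model idea of replacing the soil by frequency-independent springs and dashpots can be sketched for the vertical degree of freedom. The stiffness and radiation-damping formulas below are the classical rigid-disk-on-halfspace expressions, used here as an assumption for illustration rather than the paper's exact coefficients:

```python
import math

# Spring-dashpot pair for the vertical DOF of a rigid circular foundation
# on a halfspace (classical expressions, assumed for this sketch):
#   K = 4*G*r0 / (1 - nu)          static vertical stiffness
#   C = rho * c_p * pi * r0**2     radiation (wave) dashpot
def cone_model_vertical(G, nu, rho, r0):
    K = 4.0 * G * r0 / (1.0 - nu)
    c_p = math.sqrt(2.0 * G * (1.0 - nu) / (rho * (1.0 - 2.0 * nu)))  # P-wave speed
    C = rho * c_p * math.pi * r0**2
    return K, C

# Illustrative soil: G = 50 MPa, nu = 0.3, rho = 1800 kg/m^3, r0 = 3 m
K, C = cone_model_vertical(G=50e6, nu=0.3, rho=1800.0, r0=3.0)
```

    In a Cone-model analysis, pairs like (K, C) would be attached under the foundation in each degree of freedom in place of the full finite element soil domain.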

  20. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  1. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  2. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.

  3. Investigation of thermal energy transport from an anisotropic central heating element to the adjacent channels: A multipoint flux approximation

    KAUST Repository

    Salama, Amgad

    2015-02-01

    The problem of heat transfer from a central heating element pressed between two clad plates to cooling channels adjacent to and outboard of the plates is investigated numerically. The aim of this work is to highlight the role of thermal conductivity anisotropy of the heating element and/or the encompassing plates in thermal energy transport to the fluid passing through the two channels. When the medium is anisotropic with respect to thermal conductivity, energy transport to the neighboring channels is no longer symmetric. This asymmetry in energy fluxes influences heat transfer to the coolant, resulting in different patterns of temperature fields. In particular, it is found that the temperature fields are skewed towards the principal direction of anisotropy. In addition, the heat flux distributions along the edges of the heating element differ as a manifestation of thermal conductivity anisotropy. Furthermore, the peak temperature at the channel walls changes location and magnitude depending on the principal direction of anisotropy. Based on scaling arguments, it is found that the ratio of the width to the height of the heating system is a key parameter which suggests when one may ignore the effect of the cross-diagonal terms of the full conductivity tensor. To account for anisotropy in thermal conductivity, the method of multipoint flux approximation (MPFA) is employed. Using this technique, it is possible to find a finite difference stencil which can handle the full thermal conductivity tensor and at the same time enjoys the simplicity of a finite difference approximation. Although the finite difference stencil based on MPFA is quite complex, in this work we apply the recently introduced experimenting field approach, which constructs the global problem automatically.

  4. Improved radiative corrections for (e,e'p) experiments: Beyond the peaking approximation and implications of the soft-photon approximation

    International Nuclear Information System (INIS)

    Weissbach, F.; Hencken, K.; Rohe, D.; Sick, I.; Trautmann, D.

    2006-01-01

    Analyzing (e,e'p) experimental data involves corrections for radiative effects which change the interaction kinematics and which have to be carefully considered in order to obtain the desired accuracy. Missing momentum and energy due to bremsstrahlung have so far often been incorporated into the simulations and the experimental analyses using the peaking approximation, which assumes that all bremsstrahlung is emitted in the direction of the radiating particle. In this article we introduce a full angular Monte Carlo simulation method which overcomes this approximation. As a test, the angular distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data. Its width is found to be underestimated by the peaking approximation and described much better by the approach developed in this work. The impact of the soft-photon approximation on the photon angular distribution is found to be minor as compared to the impact of the peaking approximation. (orig.)

  5. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  6. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data

  7. That Escalated Quickly—Planning to Ignore RPE Can Backfire

    Directory of Open Access Journals (Sweden)

    Maik Bieleke

    2017-09-01

    Ratings of perceived exertion (RPE) are routinely assessed in exercise science, and RPE is substantially associated with physiological criterion measures. According to the psychobiological model of endurance, RPE is a central limiting factor in performance. While RPE is known to be affected by psychological manipulations, it remains to be examined whether RPE can be self-regulated during static muscular endurance exercises to enhance performance. In this experiment, we investigate the effectiveness of the widely used and recommended self-regulation strategy of if-then planning (i.e., implementation intentions) in down-regulating RPE and improving performance in a static muscular endurance task. Sixty-two female students (age: M = 23.7 years, SD = 4.0) were randomly assigned to an implementation intention or a control condition and performed a static muscular endurance task. They held two intertwined rings as long as possible while avoiding contacts between the rings. In the implementation intention condition, participants had an if-then plan: “If the task becomes too strenuous for me, then I ignore the strain and tell myself: Keep going!” Every 25 ± 10 s participants reported their RPE along with their perceived pain. Endurance performance was measured as time to failure, along with contact errors as a measure of performance quality. No differences emerged between implementation intention and control participants regarding time to failure and performance quality. However, mixed-effects model analyses revealed a significant Time-to-Failure × Condition interaction for RPE. Compared to the control condition, participants in the implementation intention condition reported substantially greater increases in RPE during the second half of the task and reached higher total values of RPE before task termination. A similar but weaker pattern emerged for perceived pain. Our results demonstrate that RPE during an endurance task can be self-regulated with if-then planning.

  8. Experimental amplification of an entangled photon: what if the detection loophole is ignored?

    International Nuclear Information System (INIS)

    Pomarico, Enrico; Sanguinetti, Bruno; Sekatski, Pavel; Zbinden, Hugo; Gisin, Nicolas

    2011-01-01

    The experimental verification of quantum features, such as entanglement, at large scales is extremely challenging because of environment-induced decoherence. Indeed, measurement techniques for demonstrating the quantumness of multiparticle systems in the presence of losses are difficult to define, and if they are not sufficiently accurate they can provide wrong conclusions. We present a Bell test where one photon of an entangled pair is amplified and then detected by threshold detectors, whose signals undergo postselection. The amplification is performed by a classical machine, which produces a fully separable micro-macro state. However, by adopting such a technique one can surprisingly observe a violation of the Clauser-Horne-Shimony-Holt inequality. This is due to the fact that ignoring the detection loophole opened by the postselection and the system losses can lead to misinterpretations, such as claiming micro-macro entanglement in a setup where evidently it is not present. By using threshold detectors and postselection, one can only infer the entanglement of the initial pair of photons, and so micro-micro entanglement, as is further confirmed by the violation of a nonseparability criterion for bipartite systems. How to detect photonic micro-macro entanglement in the presence of losses with the currently available technology remains an open question.

  9. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction combines Fourier transforms in space with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
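    The record's key premise is that a smooth space-wavenumber matrix has rapidly decaying singular values, so a few representative rows and columns suffice. A minimal NumPy sketch of that premise (assuming NumPy is available; the cosine kernel is an illustrative stand-in for the actual propagator matrix, and truncated SVD stands in for the paper's row/column selection algorithm):

```python
import numpy as np

# A smooth space-wavenumber kernel: smooth => rapidly decaying singular values
x = np.linspace(0.0, 1.0, 120)   # spatial locations
k = np.linspace(0.0, 5.0, 80)    # wavenumbers
W = np.cos(np.outer(x, k))

def lowrank(W, r):
    """Best rank-r approximation via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def rel_err(r):
    return np.linalg.norm(W - lowrank(W, r)) / np.linalg.norm(W)

err2, err10 = rel_err(2), rel_err(10)   # error drops quickly with rank
```

    A 120 x 80 operator is compressed to rank 10 with small relative error; applying the low-rank factors instead of the full matrix is what makes the extrapolation cheap.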

  10. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding the edges of polygons is a version of the art gallery problem. The goal is to find the minimum number of guards needed to cover the edges of a polygon. This problem is NP-hard, and to our knowledge approximation algorithms exist only for simple polygons. In this paper we present two approximation algorithms for guarding ...

  11. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties for the reduced C*-algebras C*_r(G) can give us the approximation properties of G. For example, Lance [7] proved that the nuclearity of C*_r(G) is equivalent to the amenability of G; ...

  12. Derivation of Electromagnetism from the Elastodynamics of the Spacetime Continuum

    Directory of Open Access Journals (Sweden)

    Millette P. A.

    2013-04-01

    We derive Electromagnetism from the Elastodynamics of the Spacetime Continuum based on the identification of the theory’s antisymmetric rotation tensor with the electromagnetic field-strength tensor. The theory provides a physical explanation of the electromagnetic potential, which arises from transverse (shearing) displacements of the spacetime continuum, in contrast to mass, which arises from longitudinal (dilatational) displacements. In addition, the theory provides a physical explanation of the current density four-vector, as the 4-gradient of the volume dilatation of the spacetime continuum. The Lorentz condition is obtained directly from the theory. In addition, we obtain a generalization of Electromagnetism for the situation where a volume force is present, in the general non-macroscopic case. Maxwell’s equations are found to remain unchanged, but the current density has an additional term proportional to the volume force.

  13. Approximate number word knowledge before the cardinal principle.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge (understanding the relation between the count words and the approximate magnitudes of sets) is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge: before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
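    The likelihood bypass described above reduces, in its simplest form, to rejection sampling: draw a parameter from the prior, simulate data, and keep the draw when a summary of the simulated data lands close to the observed summary. A stdlib-only sketch on a toy problem (the model, prior, and tolerance are illustrative choices, not from the article):

```python
import random
import statistics

random.seed(0)
# "Observed" data: 100 draws from N(mu=2, sd=1); only a summary statistic is kept
observed = [random.gauss(2.0, 1.0) for _ in range(100)]
obs_mean = statistics.fmean(observed)

def simulate(mu, n=100):
    """Forward-simulate the model and return the summary statistic."""
    return statistics.fmean(random.gauss(mu, 1.0) for _ in range(n))

# ABC rejection: sample mu from the prior, keep it whenever the simulated
# summary lands within eps of the observed one -- no likelihood evaluated.
eps, accepted = 0.1, []
for _ in range(20000):
    mu = random.uniform(-5.0, 5.0)   # flat prior on the mean
    if abs(simulate(mu) - obs_mean) < eps:
        accepted.append(mu)

posterior_mean = statistics.fmean(accepted)
```

    The accepted draws approximate the posterior; shrinking eps tightens the approximation at the cost of a lower acceptance rate, which is exactly the trade-off the abstract flags as needing careful assessment.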

  15. Pawlak algebra and approximate structure on fuzzy lattice.

    Science.gov (United States)

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.

  16. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    Science.gov (United States)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

    Via a dynamical cluster approximation with N c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters persist even in the highly doped region, unlike the results of the CDMFT+CTQMC approach. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in a semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial step, because the SCA can manage long-range spatial fluctuations in feasible computational times beyond the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on the fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.

  17. Ignorance is no excuse for directors minimizing information asymmetry affecting boards

    Directory of Open Access Journals (Sweden)

    Eythor Ivar Jonsson

    2006-11-01

    This paper looks at information asymmetry at the board level and how lack of information has played a part in undermining the power of the board of directors. Information is power, and at board level, information is essential to keep the board knowledgeable about the failures and successes of the organization that it is supposed to govern. Although lack of information has become a popular excuse for boards, the mantra could, and should, be changing to "Ignorance is no excuse" (Mueller, 1993). This paper explores some of the information system solutions that aim to resolve the problems of information asymmetry. Furthermore, three case studies are used to explore the problem of asymmetric information at board level and how boards are trying to solve it. The focus of the discussion is to (a) describe how directors experience information asymmetry and whether they find it troublesome, (b) assess how important information is for the control and strategy roles of the board, and (c) find out how boards can minimize the problem of asymmetric information. The research is conducted through semi-structured interviews with directors, managers and accountants. This paper offers an interesting exploration into information, or the lack of it, at board level. It describes, from both theoretical and practical viewpoints, the problem of information asymmetry at board level and how companies are trying to solve it. It is an issue that has only been lightly touched upon in the corporate governance literature but is likely to attract more attention and research in the future.

  18. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles of this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  19. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  20. Uniform analytic approximation of Wigner rotation matrices

    Science.gov (United States)

    Hoffmann, Scott E.

    2018-02-01

    We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.

  1. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  2. Approximation Properties of Certain Summation Integral Type Operators

    Directory of Open Access Journals (Sweden)

    Patel P.

    2015-03-01

    In the present paper, we study the approximation properties of a family of linear positive operators and establish direct results, an asymptotic formula, the rate of convergence, a weighted approximation theorem, an inverse theorem and better approximation for this family of linear positive operators.

  3. Semiclassical initial value approximation for Green's function.

    Science.gov (United States)

    Kay, Kenneth G

    2010-06-28

    A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.

  4. The adiabatic approximation in multichannel scattering

    International Nuclear Information System (INIS)

    Schulte, A.M.

    1978-01-01

    Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus, the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus, based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)

  5. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)

  6. Commentary: Ignorance as Bias: Radiolab, Yellow Rain, and “The Fact of the Matter”

    Directory of Open Access Journals (Sweden)

    Paul Hillmer

    2017-12-01

    In 2012 the National Public Radio show “Radiolab” released a podcast (later broadcast on air) essentially asserting that Hmong victims of a suspected chemical agent known as “yellow rain” were ignorant of their surroundings and the facts, and were merely victims of exposure, dysentery, tainted water, and other natural causes. Relying heavily on the work of Dr. Matthew Meselson, Dr. Thomas Seeley, and former CIA officer Merle Pribbenow, Radiolab asserted that Hmong victims mistook bee droppings, defecated en masse by flying Asian honey bees, for “yellow rain.” They brought their foregone conclusions to an interview with Eng Yang, a self-described yellow rain survivor, and his niece, memoirist Kao Kalia Yang, who served as translator. The interview went horribly wrong when their dogged belief in the “bee dung hypothesis” was met with stiff and ultimately impassioned opposition. Radiolab’s confirmation bias led them to dismiss contradictory scientific evidence and mislead their audience. While the authors remain agnostic about the potential use of yellow rain in Southeast Asia, they believe the evidence shows that further study is needed before a final conclusion can be reached.

  7. Function approximation using combined unsupervised and supervised learning.

    Science.gov (United States)

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
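    The two-step procedure in the record (unsupervised manifold mapping, then supervised fitting in the reduced space) can be sketched compactly. Assuming NumPy is available; for brevity, PCA stands in for the paper's over-complete SOMs and a polynomial fit stands in for the single-hidden-layer network, so this is an illustration of the decomposition, not of the authors' exact models:

```python
import numpy as np

# 3-D inputs that actually lie on a 1-D manifold (a line), with a target
# that depends only on the position along that manifold.
t = np.linspace(0.0, 3.0, 200)
X = np.outer(t, [1.0, 2.0, 3.0])   # 200 x 3 high-dimensional inputs
y = np.sin(t)                       # target function on the manifold

# Step 1 (unsupervised): map onto the leading principal component,
# i.e. recover a 1-D coordinate on the manifold from the 3-D data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                      # reduced 1-D coordinate

# Step 2 (supervised): fit the target in the reduced coordinate only.
coeffs = np.polyfit(z, y, 7)
mse = np.mean((np.polyval(coeffs, z) - y) ** 2)
```

    Fitting in the 1-D coordinate sidesteps the curse of dimensionality the abstract describes: the supervised model never sees the 3-D ambient space.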

  8. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  9. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides an excellent framework for dealing with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
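    The "simple fuzzy set operations" for spectral matching mentioned above typically mean min for intersection and max for union over membership vectors. A stdlib-only sketch under that assumption; the compound names, membership values, and the Jaccard-style matching index are illustrative, not taken from the record:

```python
def fuzzy_match(sample, reference):
    """Degree of matching between two fuzzy sets given as membership
    vectors over the same wavelength bins: |A intersect B| / |A union B|,
    with min as intersection and max as union (a Jaccard-style index)."""
    inter = sum(min(a, b) for a, b in zip(sample, reference))
    union = sum(max(a, b) for a, b in zip(sample, reference))
    return inter / union if union else 1.0

# Tiny "library" of fuzzified reference spectra (illustrative values)
library = {
    "compound_A": [0.9, 0.8, 0.1, 0.0, 0.2],
    "compound_B": [0.1, 0.2, 0.9, 0.8, 0.1],
}
unknown = [0.8, 0.7, 0.2, 0.1, 0.2]   # noisy observation resembling A
best = max(library, key=lambda name: fuzzy_match(unknown, library[name]))
```

    The matching degree tolerates the imprecision in the observed memberships, which is the point of using fuzzy rather than crisp set operations here.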

  10. Face Recognition using Approximate Arithmetic

    DEFF Research Database (Denmark)

    Marso, Karol

    Face recognition is an image processing technique which aims to identify human faces and has found use in various fields, for example in security. Throughout the years this field has evolved, and there are many approaches and many different algorithms which aim to make face recognition as effective ... processing applications the results do not need to be completely precise, and the use of approximate arithmetic can lead to reductions in delay, area and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.

  11. Stochastic quantization and mean field approximation

    International Nuclear Information System (INIS)

    Jengo, R.; Parga, N.

    1983-09-01

    In the context of the stochastic quantization we propose factorized approximate solutions for the Fokker-Planck equation for the XY and Zsub(N) spin systems in D dimensions. The resulting differential equation for a factor can be solved and it is found to give in the limit of t→infinity the mean field or, in the more general case, the Bethe-Peierls approximation. (author)

  12. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  13. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  14. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
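The "well known" forward direction, reducing a Chebyshev (minimax) fit to a linear program, can be sketched with SciPy's `linprog`; the data below are illustrative. The reverse reduction proved in the paper is not shown here:

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev fit of a line a + b*x to data: minimise t subject to
#   -t <= a + b*x_i - y_i <= t   for every sample i.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])       # lies exactly on y = 1 + 2x

ones = np.ones_like(x)
# Decision variables: [a, b, t]; objective: minimise t.
A_ub = np.vstack([np.column_stack([ ones,  x, -ones]),    #  a + b*x - y <= t
                  np.column_stack([-ones, -x, -ones])])   # -(a + b*x - y) <= t
b_ub = np.concatenate([y, -y])
res = linprog(c=[0, 0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a, b, t = res.x
```

Since the sample data lie exactly on a line, the optimal minimax error `t` is zero and the fit recovers the line's coefficients.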

  15. Thin-wall approximation in vacuum decay: A lemma

    Science.gov (United States)

    Brown, Adam R.

    2018-05-01

    The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.

  16. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
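A much-simplified analogue of the "weight equations as linear systems" idea: if the hidden-layer weights are held fixed, exactly or approximately matching the input-output training set reduces to a linear least-squares problem for the output weights. The network size, activation, and data below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Smooth target function sampled as batch data.
x = np.linspace(-1.0, 1.0, 50)[:, None]
y = np.sin(np.pi * x).ravel()

# Fix the input weights and biases; the cascade then becomes linear in the
# output weights, so training is a single linear-algebra solve.
W = rng.normal(size=(1, 20))
b = rng.normal(size=20)
H = np.tanh(x @ W + b)                 # hidden-layer outputs, 50 x 20
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

err = np.max(np.abs(H @ w_out - y))    # worst-case training error
```

This captures why algebraic training can be much faster than gradient-based optimization: the expensive nonlinear search is replaced by one linear solve per set of weight equations.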

  17. An improved saddlepoint approximation.

    Science.gov (United States)

    Gillespie, Colin S; Renshaw, Eric

    2007-08-01

    Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst Wang's neat scaling approach [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
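For concreteness, the basic (unrenormalised) saddlepoint density that the paper refines can be written out for a Gamma target, whose cumulant generating function K(s) is known in closed form. This sketch shows the standard construction only, not the corrections proposed above:

```python
import numpy as np
from math import gamma

# Saddlepoint approximation to the Gamma(k, rate=1) density.
# CGF: K(s) = -k*log(1 - s); the saddlepoint solves K'(s) = k/(1-s) = x.
def saddlepoint_gamma(x, k):
    s_hat = 1.0 - k / x                  # root of K'(s) = x
    K = -k * np.log(1.0 - s_hat)
    K2 = k / (1.0 - s_hat) ** 2          # K''(s_hat) = x**2 / k
    return np.exp(K - s_hat * x) / np.sqrt(2.0 * np.pi * K2)

def exact_gamma(x, k):
    return x ** (k - 1) * np.exp(-x) / gamma(k)
```

For the Gamma family the unnormalised saddlepoint density is proportional to the exact one (the constant ratio is the Stirling-series error in Γ(k)), which makes it a convenient test case: for k = 2 the relative error is about 4% at every x.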

  18. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  19. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been

  20. Coefficients Calculation in Pascal Approximation for Passive Filter Design

    Directory of Open Access Journals (Sweden)

    George B. Kasapoglu

    2018-02-01

    Full Text Available The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude, in contrast to the most well-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method for calculating the coefficients of the Pascal function. Two examples of the passive design are presented to illustrate the advantages and disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, Elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

  1. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort than previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that shifts the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function approximates data points to be adapted at run-time.
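The conventional Kalman-filter side of such algorithms (without the paper's novel shift operation) can be sketched as a recursive update of B-spline coefficients, one streaming data point at a time. The knot vector, spline degree, and noise settings below are illustrative:

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion for the i-th degree-k B-spline on knots t."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] != t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

deg = 2
knots = [0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1]   # clamped; 6 coefficients
n = len(knots) - deg - 1

# A Kalman filter with a static state (the coefficient vector) is recursive
# least squares: each (x, y) pair triggers one measurement update.
c = np.zeros(n)
P = 1e3 * np.eye(n)                # large initial uncertainty
R = 1e-4                           # measurement noise variance
for x in np.linspace(0.0, 0.99, 200):
    y = np.sin(np.pi * x)          # streaming measurements of a smooth curve
    H = np.array([bspline_basis(i, deg, knots, x) for i in range(n)])
    S = H @ P @ H + R              # innovation variance
    K = P @ H / S                  # Kalman gain
    c += K * (y - H @ c)
    P -= np.outer(K, H @ P)

def predict(x):
    return sum(c[i] * bspline_basis(i, deg, knots, x) for i in range(n))
```

Because the state is confined to the coefficients of one fixed knot vector, this conventional scheme can only approximate within [0, 1]; RBA's shift operation is precisely what relaxes that restriction.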

  2. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics, including loop perforation, data caching, and incomplete graph coloring and synchronization, and evaluate their efficiency in scaling performance with minimal loss of accuracy. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
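Loop perforation, one of the heuristics named above, simply drops some iterations of a loop and accepts the resulting error. A minimal sketch on PageRank (toy graph, illustrative parameters; not the authors' implementation): perforating the power-iteration loop halves the work while the converged ranks barely move.

```python
import numpy as np

# Tiny directed graph as an adjacency list.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85

def pagerank(n_iter, skip=0):
    """Power iteration; skip > 0 perforates the loop by dropping sweeps."""
    r = np.full(n, 1.0 / n)
    for it in range(n_iter):
        if skip and it % (skip + 1) != 0:
            continue                       # perforated: this sweep is skipped
        new = np.full(n, (1.0 - d) / n)
        for src, outs in links.items():
            for dst in outs:
                new[dst] += d * r[src] / len(outs)
        r = new
    return r

exact = pagerank(80)              # full loop
approx = pagerank(80, skip=1)     # every second sweep dropped: ~2x cheaper
```

Since power iteration contracts the error by the damping factor per sweep, 40 effective sweeps already leave the perforated ranks within about 10^-2 of the full result in L1 norm, which is the quality/performance trade-off the abstract describes.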

  3. Conditional Density Approximations with Mixtures of Polynomials

    DEFF Research Database (Denmark)

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre

    2015-01-01

    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...

  4. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  5. On Love's approximation for fluid-filled elastic tubes

    International Nuclear Information System (INIS)

    Caroli, E.; Mainardi, F.

    1980-01-01

    A simple procedure is set up to introduce Love's approximation for wave propagation in thin-walled fluid-filled elastic tubes. The dispersion relation for linear waves and the radial profile for fluid pressure are determined in this approximation. It is shown that the Love approximation is valid in the low-frequency regime. (author)

  6. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin- (WKB or quasi-classical) approximation and of the method of 1/N -expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter, the material is presented as a succession of problems, followed by a detailed way of solving them. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.

  7. SFU-driven transparent approximation acceleration on GPUs

    NARCIS (Netherlands)

    Li, A.; Song, S.L.; Wijtvliet, M.; Kumar, A.; Corporaal, H.

    2016-01-01

    Approximate computing, the technique that sacrifices certain amount of accuracy in exchange for substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs

  8. Approximate Networking for Universal Internet Access

    Directory of Open Access Journals (Sweden)

    Junaid Qadir

    2017-12-01

    Full Text Available Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on a utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of “approximate computing” that rely on relaxing the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards various knobs available at network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.

  9. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
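The one-dimensional core of this construction, maximising the Gaussian lower bound for a single Poisson count with a log link and a standard normal prior, can be sketched with plain gradient ascent. The count, step size, and iteration budget are illustrative, and the paper's alternating direction algorithm is replaced here by the simplest possible optimizer:

```python
import numpy as np

# Model: y ~ Poisson(exp(x)), prior x ~ N(0, 1); variational q = N(m, s).
# ELBO up to constants: y*m - exp(m + s/2) - 0.5*(m**2 + s) + 0.5*log(s),
# using E_q[exp(x)] = exp(m + s/2) for a Gaussian q.
y = 5.0
m, s = np.log(y), 0.5          # initialise at the MLE of the rate
lr = 0.005
for _ in range(20000):
    e = np.exp(m + s / 2.0)
    grad_m = y - e - m                   # d ELBO / d m
    grad_s = -0.5 * e - 0.5 + 0.5 / s    # d ELBO / d s
    m += lr * grad_m
    s = max(s + lr * grad_s, 1e-6)       # keep the variance positive
```

At the optimum the stationarity conditions m = y - exp(m + s/2) and 1/s = 1 + exp(m + s/2) hold, illustrating how the bound penalises the covariance as well as the mean, the "variant of Tikhonov regularization" noted in the abstract.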

  10. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  11. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    Romeijnders, W.; Stougie, L.; van der Vlerk, M.H.

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value. However,

  12. Magnus approximation in the adiabatic picture

    International Nuclear Information System (INIS)

    Klarsfeld, S.; Oteo, J.A.

    1991-01-01

    A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs

  13. On the practice of ignoring center-patient interactions in evaluating hospital performance.

    Science.gov (United States)

    Varewyck, Machteld; Vansteelandt, Stijn; Eriksson, Marie; Goetghebeur, Els

    2016-01-30

    We evaluate the performance of medical centers based on a continuous or binary patient outcome (e.g., 30-day mortality). Common practice adjusts for differences in patient mix through outcome regression models, which include patient-specific baseline covariates (e.g., age and disease stage) besides center effects. Because a large number of centers may need to be evaluated, the typical model postulates that the effect of a center on outcome is constant over patient characteristics. This may be violated, for example, when some centers are specialized in children or geriatric patients. Including interactions between certain patient characteristics and the many fixed center effects in the model increases the risk for overfitting, however, and could imply a loss of power for detecting centers with deviating mortality. Therefore, we assess how the common practice of ignoring such interactions impacts the bias and precision of directly and indirectly standardized risks. The reassuring conclusion is that the common practice of working with the main effects of a center has minor impact on hospital evaluation, unless some centers actually perform substantially better on a specific group of patients and there is strong confounding through the corresponding patient characteristic. The bias is then driven by an interplay of the relative center size, the overlap between covariate distributions, and the magnitude of the interaction effect. Interestingly, the bias on indirectly standardized risks is smaller than on directly standardized risks. We illustrate our findings by simulation and in an analysis of 30-day mortality on Riksstroke. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  14. IgM nephropathy; can we still ignore it.

    Science.gov (United States)

    Vanikar, Aruna

    2013-04-01

    IgM nephropathy (IgMN) is a relatively less recognized clinico-immunopathological entity in the domain of glomerulonephritis, often thought to be a bridge between minimal change disease and focal segmental glomerulosclerosis. The Directory of Open Access Journals (DOAJ), Google Scholar, Pubmed (NLM), LISTA (EBSCO) and Web of Science have been searched. IgM nephropathy can present as nephrotic syndrome or, less commonly, with subnephrotic proteinuria or, rarely, hematuria. About 30% of patients respond to steroids, whereas the others are steroid dependent/resistant. They should be given a trial of Rituximab or stem cell therapy. IgMN is an important and rather neglected pathology responsible for renal morbidity in children and adults in developing countries as compared to developed nations, with an incidence of 2-18.5% of native biopsies. Abnormal T-cell function with hyperfunctioning suppressor T-cells is believed to be responsible for this disease entity. Approximately one third of the patients are steroid responders, whereas the remaining two thirds are steroid resistant or dependent. Therapeutic trials including cell therapies targeting suppressor T-cells are required.

  15. Space-efficient path-reporting approximate distance oracles

    DEFF Research Database (Denmark)

    Elkin, Michael; Neiman, Ofer; Wulff-Nilsen, Christian

    2016-01-01

    We consider approximate path-reporting distance oracles, distance labeling and labeled routing with extremely low space requirements, for general undirected graphs. For distance oracles, we show how to break the nlog⁡n space bound of Thorup and Zwick if approximate paths rather than distances need...

  16. Aspects of three field approximations: Darwin, frozen, EMPULSE

    International Nuclear Information System (INIS)

    Boyd, J.K.; Lee, E.P.; Yu, S.S.

    1985-01-01

    The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability

  17. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  18. All-Norm Approximation Algorithms

    NARCIS (Netherlands)

    Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik

    2002-01-01

    A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓp norms. We address this problem by introducing the concept of an All-norm ρ-approximation

  19. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  20. Approximate Noether symmetries and collineations for regular perturbative Lagrangians

    Science.gov (United States)

    Paliathanasis, Andronikos; Jamal, Sameerah

    2018-01-01

    Regular perturbative Lagrangians that admit approximate Noether symmetries and approximate conservation laws are studied. Specifically, we investigate the connection between approximate Noether symmetries and collineations of the underlying manifold. In particular we determine the generic Noether symmetry conditions for the approximate point symmetries and we find that for a class of perturbed Lagrangians, Noether symmetries are related to the elements of the Homothetic algebra of the metric which is defined by the unperturbed Lagrangian. Moreover, we discuss how exact symmetries become approximate symmetries. Finally, some applications are presented.

  1. Square well approximation to the optical potential

    International Nuclear Information System (INIS)

    Jain, A.K.; Gupta, M.C.; Marwadi, P.R.

    1976-01-01

    Approximations for obtaining T-matrix elements for a sum of several potentials in terms of T-matrices for individual potentials are studied. Based on model calculations for S-wave for a sum of two separable non-local potentials of Yukawa type form factors and a sum of two delta function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all the energy regions by the sum of T-matrices for individual potentials. Based on this, an approximate method for finding the T-matrix for any local potential by approximating it by a sum of a suitable number of square wells is presented. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to the Saxon-Wood potentials and good agreement with exact results is found. (author)

  2. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  3. Uncertainty relations for approximation and estimation

    International Nuclear Information System (INIS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-01-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  4. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

    "In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  5. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important application of these models to density estimation, missing data imputation, data compression and planning.

  6. Approximation of the inverse G-frame operator

    Indian Academy of Sciences (India)

    ... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.

  7. Optical approximation in the theory of geometric impedance

    International Nuclear Information System (INIS)

    Stupakov, G.; Bane, K.L.F.; Zagorodnov, I.

    2007-02-01

    In this paper we introduce an optical approximation into the theory of impedance calculation, one valid in the limit of high frequencies. This approximation neglects diffraction effects in the radiation process, and is conceptually equivalent to the approximation of geometric optics in electromagnetic theory. Using this approximation, we derive equations for the longitudinal impedance for arbitrary offsets, with respect to a reference orbit, of source and test particles. With the help of the Panofsky-Wenzel theorem we also obtain expressions for the transverse impedance (also for arbitrary offsets). We further simplify these expressions for the case of the small offsets that are typical for practical applications. Our final expressions for the impedance, in the general case, involve two dimensional integrals over various cross-sections of the transition. We further demonstrate, for several known axisymmetric examples, how our method is applied to the calculation of impedances. Finally, we discuss the accuracy of the optical approximation and its relation to the diffraction regime in the theory of impedance. (orig.)

  8. APPROXIMATION OF FREE-FORM CURVE – AIRFOIL SHAPE

    Directory of Open Access Journals (Sweden)

    CHONG PERK LIN

    2013-12-01

    Full Text Available Approximation of free-form shape is essential in numerous engineering applications, particularly in the automotive and aircraft industries. Commercial CAD software for the approximation of free-form shape is based almost exclusively on parametric polynomials and rational parametric polynomials. A parametric curve is defined by a vector function of one independent variable, R(u) = (x(u), y(u), z(u)), where 0 ≤ u ≤ 1. Bézier representation is one of the parametric functions widely used in approximating free-form shapes. Given a string of points assumed sufficiently dense to characterise the airfoil shape, it is desirable to approximate the shape with a Bézier representation. The expectation is that the representation function is close to the shape within an acceptable working tolerance. In this paper, the aim is to explore the use of manual and automated methods for approximating the section curve of an airfoil with a Bézier representation.
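
    A least-squares Bézier fit of this kind can be sketched in a few lines of numpy; the sample curve, degree, and uniform parameterization below are illustrative assumptions (chord-length parameterization is usually preferred for real airfoil data):

```python
import numpy as np
from math import comb

def bernstein_matrix(u, n):
    # Row k holds the Bernstein basis B_{i,n}(u_k) for i = 0..n.
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
                     for t in u])

def fit_bezier(points, degree):
    # Least-squares control points for a Bezier curve through dense samples,
    # using a uniform parameterization.
    u = np.linspace(0.0, 1.0, len(points))
    B = bernstein_matrix(u, degree)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl, B

# Dense samples of a smooth curve standing in for an airfoil section.
s = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([s, 0.3 * np.sin(np.pi * s)])
ctrl, B = fit_bezier(pts, degree=5)
residual = np.max(np.abs(B @ ctrl - pts))  # max deviation from the samples
assert residual < 1e-3
```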

  9. Conference on Abstract Spaces and Approximation

    CERN Document Server

    Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation

    1969-01-01

    The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs littl...

  10. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
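
    The information-theoretic comparison can be sketched directly; the block below computes the Kullback-Leibler divergence between a Pareto Type I model (unit scale, shape a) and the mean-matched Exponential, both in closed form and by quadrature. This particular parameterization is an illustrative assumption, not necessarily the paper's:

```python
import numpy as np

# Pareto (Type I) clutter model: p(x) = a * x**(-a-1), x >= 1, shape a > 1.
# Exponential approximation matched to the Pareto mean: lam = (a - 1) / a.
a = 5.0
lam = (a - 1.0) / a

# Closed-form KL divergence D(Pareto || Exponential), using the Pareto
# moments E[ln x] = 1/a and E[x] = a/(a-1):
kl_exact = np.log(a / lam) - (a + 1.0) / a + lam * a / (a - 1.0)

# Numerical check by trapezoidal quadrature over a truncated support.
x = np.linspace(1.0, 200.0, 400_000)
p = a * x ** (-a - 1.0)
q = lam * np.exp(-lam * x)
integrand = p * (np.log(p) - np.log(q))
kl_num = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

assert abs(kl_num - kl_exact) < 1e-3
```

    A small divergence indicates the Exponential model (and hence the simpler detector) is an acceptable substitute for the given shape parameter.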

  11. Tidal Evolution of Asteroidal Binaries. Ruled by Viscosity. Ignorant of Rigidity.

    Science.gov (United States)

    Efroimsky, Michael

    2015-10-01

    This is a pilot paper serving as a launching pad for the study of orbital and spin evolution of binary asteroids. The rate of tidal evolution of asteroidal binaries is defined by the dynamical Love numbers k_l divided by quality factors Q. Common in the literature is the (oftentimes illegitimate) approximation of the dynamical Love numbers with their static counterparts. Since the static Love numbers are, approximately, proportional to the inverse rigidity, this renders a popular fallacy that the tidal evolution rate is determined by the product of the rigidity and the quality factor: k_l/Q ∝ 1/(μQ). In reality, the dynamical Love numbers depend on the tidal frequency and all rheological parameters of the tidally perturbed body (not just rigidity). We demonstrate that in asteroidal binaries the rigidity of their components plays virtually no role in tidal friction and tidal lagging, and thereby has almost no influence on the intensity of tidal interactions (tidal torques, tidal dissipation, tidally induced changes of the orbit). A key quantity that overwhelmingly determines the tidal evolution is the product of the effective viscosity η and the tidal frequency χ. The functional form of the torque's dependence on this product depends on which wins in the competition between viscosity and self-gravitation; hence a quantitative criterion to distinguish between the two regimes. For higher values of ηχ we get k_l/Q ∝ 1/(ηχ), while for lower values we obtain k_l/Q ∝ ηχ. Our study rests on the assumption that asteroids can be treated as Maxwell bodies. Applicable to rigid rocks at low frequencies, this approximation is used here also for rubble piles, due to the lack of a better model. In the future, as we learn more about the mechanics of granular mixtures in a weak gravity field, we may have to amend the tidal theory with other rheological parameters, ones that do not show up in the description of viscoelastic bodies. This line of study provides

  12. Diagonal Pade approximations for initial value problems

    International Nuclear Information System (INIS)

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
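
    The lowest diagonal Padé approximant of the time-evolution operator, exp(hA) ≈ (I − hA/2)⁻¹(I + hA/2), already illustrates the idea; the harmonic-oscillator test problem below is an illustrative choice, not the paper's application:

```python
import numpy as np

# Diagonal (1,1) Pade approximant of the time-evolution operator exp(h*A):
#   exp(h*A) ~ (I - h*A/2)^(-1) (I + h*A/2)   (the Crank-Nicolson scheme).
# Applied to the harmonic oscillator y'' = -y written as a first-order system.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
I = np.eye(2)
h = 0.01
step = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

y = np.array([1.0, 0.0])              # y(0) = 1, y'(0) = 0  ->  y(t) = cos(t)
for _ in range(int(round(1.0 / h))):  # integrate to t = 1
    y = step @ y

assert abs(y[0] - np.cos(1.0)) < 1e-4
```

    For skew-symmetric A the (1,1) approximant is a Cayley transform, so the scheme preserves the oscillation amplitude exactly and only accrues an O(h²) phase error.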

  13. A test of the adhesion approximation for gravitational clustering

    Science.gov (United States)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  14. Hydrogen: Beyond the Classic Approximation

    International Nuclear Information System (INIS)

    Scivetti, Ivan

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  15. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  16. On transparent potentials: a Born approximation study

    International Nuclear Information System (INIS)

    Coudray, C.

    1980-01-01

    In the framework of the inverse scattering problem at fixed energy, a class of potentials transparent in the Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in the Born approximation are then stated, and bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy and for large values of the angular momentum, are found to be independent of the energy

  17. Approximate supernova remnant dynamics with cosmic ray production

    Science.gov (United States)

    Voelk, H. J.; Drury, L. O.; Dorfi, E. A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models treating the cosmic rays (CRs) as test particles in a prescribed supernova remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the interstellar medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test-particle approximation.

  18. Approximate supernova remnant dynamics with cosmic ray production

    International Nuclear Information System (INIS)

    Voelk, H.J.; Drury, L.O.; Dorfi, E.A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models treating the cosmic rays (CRs) as test particles in a prescribed supernova remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the interstellar medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test-particle approximation

  19. Geometric convergence of some two-point Pade approximations

    International Nuclear Information System (INIS)

    Nemeth, G.

    1983-01-01

    The geometric convergences of some two-point Pade approximations are investigated on the real positive axis and on certain infinite sets of the complex plane. Some theorems concerning the geometric convergence of Pade approximations are proved, and bounds on geometric convergence rates are given. The results may be interesting considering the applications both in numerical computations and in approximation theory. As a specific case, the numerical calculations connected with the plasma dispersion function may be performed. (D.Gy.)

  20. Standard filter approximations for low power Continuous Wavelet Transforms.

    Science.gov (United States)

    Casson, Alexander J; Rodriguez-Villegas, Esther

    2010-01-01

    Analogue domain implementations of the Continuous Wavelet Transform (CWT) have proved popular in recent years as they can be implemented at very low power consumption levels. This is essential for use in wearable, long term physiological monitoring systems. Present analogue CWT implementations rely on taking a mathematical approximation of the wanted mother wavelet function to give a filter transfer function that is suitable for circuit implementation. This paper investigates the use of standard filter approximations (Butterworth, Chebyshev, Bessel) as an alternative wavelet approximation technique. This extends the number of approximation techniques available for generating analogue CWT filters. An example ECG analysis shows that signal information can be successfully extracted using these CWT approximations.
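
    The defining magnitude response of the Butterworth family, |H(jω)|² = 1/(1 + ω^(2n)), can be checked directly; the sketch below is a generic illustration of the filter approximations named above, not the paper's wavelet design (in practice, scipy.signal.butter, cheby1 and bessel generate such designs):

```python
import numpy as np

# Magnitude response of an analog Butterworth low-pass prototype of order n,
# with the cutoff normalized to w = 1. Higher order gives a sharper
# transition, which is what makes standard filter families candidates for
# approximating a mother-wavelet transfer function.
def butterworth_mag(w, n):
    return 1.0 / np.sqrt(1.0 + w ** (2 * n))

w = np.logspace(-1, 1, 201)
for n in (2, 4, 8):
    mag = butterworth_mag(w, n)
    # -3 dB at the cutoff frequency, independent of order:
    assert abs(butterworth_mag(1.0, n) - 1.0 / np.sqrt(2.0)) < 1e-12
    # Maximally flat passband: the response never increases with frequency.
    assert np.all(np.diff(mag) < 1e-12)
```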

  1. Ordering, symbols and finite-dimensional approximations of path integrals

    International Nuclear Information System (INIS)

    Kashiwa, Taro; Sakoda, Seiji; Zenkin, S.V.

    1994-01-01

    We derive a general form of finite-dimensional approximations of path integrals for both bosonic and fermionic canonical systems in terms of symbols of operators determined by operator ordering. We argue that for a system with a given quantum Hamiltonian such approximations are independent of the type of symbols up to terms of O(ε), where ε is the infinitesimal time interval determining the accuracy of the approximations. A new class of such approximations is found for both c-number and Grassmannian dynamical variables. The actions determined by the approximations are non-local and have no classical continuum limit except in the cases of pq- and qp-ordering. As an explicit example the fermionic oscillator is considered in detail. (author)

  2. Hardness of approximation for strip packing

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin

    2017-01-01

    Strip packing is a classical packing problem, where the goal is to pack a set of rectangular objects into a strip of a given width, while minimizing the total height of the packing. The problem has multiple applications, for example, in scheduling and stock-cutting, and has been studied extensively......)-approximation by two independent research groups [FSTTCS 2016, WALCOM 2017]. This raises a question whether strip packing with polynomially bounded input data admits a quasi-polynomial time approximation scheme, as is the case for related two-dimensional packing problems like maximum independent set of rectangles or two...

  3. Adaptive control using neural networks and approximate models.

    Science.gov (United States)

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.

  4. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank possible approximations by their qualities. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Loss of deuterium in faecal solids and by sequestration in reindeer: effect on doubly labelled water studies

    Directory of Open Access Journals (Sweden)

    Geir Gotaas

    2000-03-01

    Full Text Available An underlying assumption when estimating total energy expenditure (TEE) using doubly labelled water (DLW) is that the injected isotopes (18O and 2H) leave the body only in the form of CO2 and H2O. However, both isotopes have additional routes of loss. We quantified the loss of 2H (i) attached to faecal solids and (ii) by sequestration into newly synthesised fat in reindeer (Rangifer tarandus tarandus). Estimates of the errors caused by these processes were applied to data from DLW studies with reindeer in summer and in winter. Given the net rate of faecal dry matter output and lipid synthesis in the present study, ignoring both sources of error caused the TEE of reindeer to be underestimated by approximately 5% in winter and approximately 9% in summer. The separate effect of each source of error was evaluated in summer. If ignored, loss of 2H through sequestration alone caused TEE to be underestimated by approximately 3.7%. Similarly, if ignored, loss of 2H attached to faecal solids alone caused TEE to be underestimated by approximately 5.9%.

  6. The Hartree-Fock seniority approximation

    International Nuclear Information System (INIS)

    Gomez, J.M.G.; Prieto, C.

    1986-01-01

    A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)

  7. Supersonic beams at high particle densities: model description beyond the ideal gas approximation.

    Science.gov (United States)

    Christen, Wolfgang; Rademann, Klaus; Even, Uzi

    2010-10-28

    Supersonic molecular beams constitute a very powerful technique in modern chemical physics. They offer several unique features such as a directed, collision-free flow of particles, very high luminosity, and an unsurpassed strong adiabatic cooling during the jet expansion. While it is generally recognized that their maximum flow velocity depends on the molecular weight and the temperature of the working fluid in the stagnation reservoir, little is known about the effects of elevated particle densities. Frequently, the characteristics of supersonic beams are treated in diverse approximations of an ideal gas expansion. In these simplified model descriptions, the real gas character of fluid systems is ignored, although particle associations are responsible for fundamental processes such as the formation of clusters, both in the reservoir at increased densities and during the jet expansion. In this contribution, the various assumptions of ideal gas treatments of supersonic beams and their shortcomings are reviewed. It is shown in detail that a straightforward thermodynamic approach considering the initial and final enthalpy is capable of characterizing the terminal mean beam velocity, even at the liquid-vapor phase boundary and the critical point. Fluid properties are obtained using the most accurate equations of state available at present. This procedure provides the opportunity to naturally include the dramatic effects of nonideal gas behavior for a large variety of fluid systems. Besides the prediction of the terminal flow velocity, thermodynamic models of isentropic jet expansions permit an estimate of the upper limit of the beam temperature and the amount of condensation in the beam. These descriptions can even be extended to include spinodal decomposition processes, thus providing a generally applicable tool for investigating the two-phase region of high supersaturations not easily accessible otherwise.
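
    For the ideal-gas limit discussed above, enthalpy conservation gives the textbook terminal velocity v∞ = √(2·cp·T0); the helium numbers below are a generic illustration, not data from the paper:

```python
import numpy as np

# Ideal-gas estimate of the terminal beam velocity from enthalpy conservation:
#   h0 = h_inf + v^2/2   ->   v_inf = sqrt(2 * cp * T0)
# (full conversion of stagnation enthalpy into directed flow; real-gas
# effects shift this, which is the paper's point).
R = 8.314  # J / (mol K)

def v_terminal(T0, molar_mass, gamma):
    cp = gamma / (gamma - 1.0) * R / molar_mass  # specific heat, J / (kg K)
    return np.sqrt(2.0 * cp * T0)

v_he = v_terminal(300.0, 4.0026e-3, 5.0 / 3.0)  # monatomic helium at 300 K
assert 1700.0 < v_he < 1800.0                   # ~1765 m/s
```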

  8. Quasi-fractional approximation to the Bessel functions

    International Nuclear Information System (INIS)

    Guerrero, P.M.L.

    1989-01-01

    In this paper the authors present a simple quasi-fractional approximation for the Bessel functions J_ν(x), -1 ≤ ν < 0.5. This has been obtained by extending a previously published method which uses power series and asymptotic expansions simultaneously. Both functions, exact and approximated, coincide in at least two digits for positive x and ν between -1 and 0.4
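
    The two ingredients such a method combines can be sketched for J0: a power series accurate at small x and the leading asymptotic form accurate at large x. This is a generic illustration of the two regimes, not the paper's quasi-fractional approximant:

```python
import numpy as np
from math import factorial

# Power series for J0 (accurate at small x):
def j0_series(x, terms=40):
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / factorial(k) ** 2
               for k in range(terms))

# Leading asymptotic form for J0 (accurate at large x):
def j0_asymptotic(x):
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - np.pi / 4.0)

# The two regimes overlap around x ~ 10, agreeing to a few parts in 10^3;
# a quasi-fractional approximant interpolates smoothly between them.
x = 10.0
assert abs(j0_series(x) - j0_asymptotic(x)) < 5e-3
```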

  9. Scattering theory and effective medium approximations to heterogeneous materials

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1977-01-01

    The formal analogy existing between problems studied in the microscopic theory of disordered alloys and problems concerned with the effective (macroscopic) behavior of heterogeneous materials is discussed. Attention is focused on (1) analogous approximations (effective medium approximations) developed for the microscopic problems by scattering theory concepts and techniques, but for the macroscopic problems principally by intuitive means, (2) the link, provided by scattering theory, of the intuitively developed approximations to a well-defined perturbative analysis, (3) the possible presence of conditionally convergent integrals in effective medium approximations

  10. Approximate modal analysis using Fourier decomposition

    International Nuclear Information System (INIS)

    Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana

    2010-01-01

    The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in the eigenvalue calculation. After the calculation, the eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated in a way that some parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that performs the transform on the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.

  11. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
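
    The core regression step of a GAP-style fit can be sketched with a toy one-dimensional "descriptor" and a squared-exponential kernel standing in for the SOAP similarity; everything below is an illustrative stand-in, not the published silicon potential:

```python
import numpy as np

# Toy Gaussian process regression in the spirit of GAP: predict a scalar
# "energy" from a descriptor via kernel regression on reference data.
def kernel(a, b, ell=0.5):
    # Squared-exponential kernel between two sets of 1-D descriptors.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
x_train = np.linspace(-2.0, 2.0, 25)                       # stand-in descriptors
y_train = x_train ** 2 + 0.01 * rng.standard_normal(25)    # reference "DFT" energies

sigma2 = 1e-4                                # regularizing noise variance
K = kernel(x_train, x_train) + sigma2 * np.eye(25)
alpha = np.linalg.solve(K, y_train)          # GP weights

x_test = np.array([0.5])
y_pred = kernel(x_test, x_train) @ alpha     # posterior mean prediction
assert abs(y_pred[0] - 0.25) < 0.05          # close to the true curve x^2
```

    In the real potential each atomic energy is a sum of such kernel contributions over a database of SOAP descriptors, with forces and stresses entering the fit as derivative observations.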

  12. Development of the relativistic impulse approximation

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1985-01-01

    This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references

  13. Local approximation of a metapopulation's equilibrium.

    Science.gov (United States)

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
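
    Levins's model, the reference point of this approximation, can be checked numerically: dp/dt = c·p·(1−p) − e·p has equilibrium p* = 1 − e/c for c > e. The rates below are arbitrary illustrative values:

```python
import numpy as np

# Levins's metapopulation model for the occupied fraction p of patches:
#   dp/dt = c * p * (1 - p) - e * p,
# with equilibrium p* = 1 - e/c when the colonization rate c exceeds the
# extinction rate e.
c, e = 2.0, 0.5
p_star = 1.0 - e / c

p, dt = 0.1, 0.01
for _ in range(5000):  # forward-Euler integration to t = 50
    p += dt * (c * p * (1.0 - p) - e * p)

assert abs(p - p_star) < 1e-6
```

    The paper's bounds quantify how closely the occupation probability of an individual patch in the spatial model tracks this p*, given local colonization and extinction rates.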

  14. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  15. Pion-nucleus cross sections approximation

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.

    1990-01-01

    An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which could be applied in the energy range exceeding several dozens of MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs

  16. Approximal morphology as predictor of approximal caries in primary molar teeth

    DEFF Research Database (Denmark)

    Cortes, A; Martignon, S; Qvist, V

    2018-01-01

    consent was given, participated. Upper and lower molar teeth of one randomly selected side received a 2-day temporary separation. Bitewing radiographs and silicone impressions of the interproximal area (IPA) were obtained. One year later, the procedures were repeated in 52 children (84%). The morphology of the distal surfaces of the first molar teeth and the mesial surfaces of the second molar teeth (n=208) was scored from the occlusal aspect on images from the baseline resin models, resulting in four IPA variants: concave-concave; concave-convex; convex-concave, and convex-convex. Approximal caries on the surface...

  17. Finite Element Approximation of the FENE-P Model

    OpenAIRE

    Barrett , John ,; Boyaval , Sébastien

    2017-01-01

    We extend our analysis on the Oldroyd-B model in Barrett and Boyaval [1] to consider the finite element approximation of the FENE-P system of equations, which models a dilute polymeric fluid, in a bounded domain D ⊂ R^d, d = 2 or 3, subject to no-flow boundary conditions. Our schemes are based on approximating the pressure and the symmetric conformation tensor by either (a) piecewise constants or (b) continuous piecewise linears. In case (a) the velocity field is approximated by c...

  18. Nuclear data processing, analysis, transformation and storage with Pade-approximants

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.

    1992-01-01

    A method is described to generate rational approximants of high order with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing one to compute the error at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that experimental errors are independent and normally distributed; a method of simultaneous generation of a few rational approximants with an identical set of poles; functionals other than LSM; and two-dimensional approximation. (orig.)

  19. Lattice quantum chromodynamics with approximately chiral fermions

    International Nuclear Information System (INIS)

    Hierl, Dieter

    2008-05-01

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ + pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  20. Lattice quantum chromodynamics with approximately chiral fermions

    Energy Technology Data Exchange (ETDEWEB)

    Hierl, Dieter

    2008-05-15

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ + pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  1. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  2. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and in the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
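
    A minimal sketch of the kind of experiment the record describes: a small feed-forward network trained by back-propagation to approximate a nonlinear function, with the fit quality read directly off the training error. The architecture, the target function sin(x), and the learning rate are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a smooth nonlinear target (sin) on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 16 tanh units, small random initial weights
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Back-propagation of the mean-squared error, full batch
    g2 = 2.0 * err / len(x)
    dW2, db2 = h.T @ g2, g2.sum(0)
    gh = (g2 @ W2.T) * (1.0 - h ** 2)   # gradient through tanh
    dW1, db1 = x.T @ gh, gh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(losses[0], losses[-1])            # training error measures the fit
```

    Because the target is a known analytic function, the residual here reflects the network's approximation capacity rather than noise in the data, which is the point the abstract makes.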

  3. Methods of Approximation Theory in Complex Analysis and Mathematical Physics

    CERN Document Server

    Saff, Edward

    1993-01-01

    The book incorporates research papers and surveys written by participants of an International Scientific Programme on Approximation Theory jointly supervised by the Institute for Constructive Mathematics of the University of South Florida at Tampa, USA and the Euler International Mathematical Institute at St. Petersburg, Russia. The aim of the Programme was to present new developments in Constructive Approximation Theory. The topics of the papers are: asymptotic behaviour of orthogonal polynomials, rational approximation of classical functions, quadrature formulas, theory of n-widths, nonlinear approximation in Hardy algebras, numerical results on best polynomial approximations, wavelet analysis. FROM THE CONTENTS: E.A. Rakhmanov: Strong asymptotics for orthogonal polynomials associated with exponential weights on R.- A.L. Levin, E.B. Saff: Exact Convergence Rates for Best Lp Rational Approximation to the Signum Function and for Optimal Quadrature in Hp.- H. Stahl: Uniform Rational Approximation of x .- M. Rahman, S.K. ...

  4. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...... functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators......, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...

  5. Vacancy-rearrangement theory in the first Magnus approximation

    International Nuclear Information System (INIS)

    Becker, R.L.

    1984-01-01

    In the present paper we employ the first Magnus approximation (M1A), a unitarized Born approximation, in semiclassical collision theory. We have found previously that the M1A gives a substantial improvement over the first Born approximation (B1A) and can give a good approximation to a full coupled-channels calculation of the mean L-shell vacancy probability per electron, p_L, when the L-vacancies are accompanied by a K-shell vacancy (p_L is obtained experimentally from measurements of K_α-satellite intensities). For sufficiently strong projectile-electron interactions (sufficiently large Z_p or small v) the M1A ceases to reproduce the coupled-channels results, but it is accurate over a much wider range of Z_p and v than the B1A. 27 references

  6. Minimax rational approximation of the Fermi-Dirac distribution

    Science.gov (United States)

    Moussa, Jonathan E.

    2016-10-01

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ε⁻¹)) poles to achieve an error tolerance ε at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.
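
    For context, the baseline such pole-counting results improve on can be sketched with the classical Matsubara pole expansion of the Fermi-Dirac function, whose error decays only like 1/N in the number of poles (the minimax construction of the record is far more economical and is not reproduced here):

```python
import numpy as np

def fermi_exact(x):
    return 1.0 / (np.exp(x) + 1.0)

def fermi_poles(x, n_poles):
    # Matsubara pole expansion f(x) = 1/2 - sum_n 2x / (x^2 + w_n^2)
    # with w_n = (2n + 1) * pi.  Its error decays only like 1/N,
    # which is why economical pole placements matter.
    n = np.arange(n_poles)
    w = (2 * n + 1) * np.pi
    return 0.5 - np.sum(2.0 * x / (x**2 + w**2))

x = 1.5
err_small = abs(fermi_poles(x, 10) - fermi_exact(x))
err_large = abs(fermi_poles(x, 10000) - fermi_exact(x))
print(err_small, err_large)   # thousands of poles for modest accuracy
```

    Needing on the order of 10⁴ naive poles for single-precision-level accuracy is exactly the cost that minimax pole placement collapses to a few dozen.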

  7. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense that their quality is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
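
    A bare-bones sketch of the Frobenius-norm idea behind sparse approximate inverses (without the wavelet compression the record proposes): each column of M is fit by least squares under a fixed sparsity pattern, here simply the pattern of A itself, on a small 1D Laplacian. The matrix size and pattern are illustrative assumptions:

```python
import numpy as np

n = 20
# 1D Laplacian, the classic elliptic-PDE model matrix
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Sparse approximate inverse: minimize ||A M - I||_F column by column,
# restricting each column of M to the sparsity pattern of A
M = np.zeros((n, n))
for j in range(n):
    pattern = np.nonzero(A[:, j])[0]      # allowed entries of column j
    Asub = A[:, pattern]                  # only these columns of A matter
    e = np.zeros(n); e[j] = 1.0
    mj, *_ = np.linalg.lstsq(Asub, e, rcond=None)
    M[pattern, j] = mj

jacobi = np.diag(1.0 / np.diag(A))        # diagonal baseline
r_spai = np.linalg.norm(A @ M - np.eye(n))
r_jac = np.linalg.norm(A @ jacobi - np.eye(n))
print(r_spai, r_jac)                      # SPAI residual beats Jacobi
```

    Enriching the pattern (as wavelet compression of the smooth inverse would suggest) shrinks the residual further; that trade-off between pattern size and quality is the crux of the method.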

  8. Approximate Coulomb effects in the three-body scattering problem

    International Nuclear Information System (INIS)

    Haftel, M.I.; Zankel, H.

    1981-01-01

    From the momentum space Faddeev equations we derive approximate expressions which describe the Coulomb-nuclear interference in the three-body elastic scattering, rearrangement, and breakup problems and apply the formalism to p-d elastic scattering. The approximations treat the Coulomb interference as mainly a two-body effect, but we allow for the charge distribution of the deuteron in the p-d calculations. Real and imaginary parts of the Coulomb correction to the elastic scattering phase shifts are described in terms of on-shell quantities only. In the case of pure Coulomb breakup we recover the distorted-wave Born approximation result. Comparing the derived approximation with the full Faddeev p-d elastic scattering calculation, which includes the Coulomb force, we obtain good qualitative agreement in S and P waves, but disagreement in repulsive higher partial waves. The on-shell approximation investigated is found to be superior to other current approximations. The calculated differential cross sections at 10 MeV raise the question of whether there is a significant Coulomb-nuclear interference at backward angles

  9. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  10. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  11. Approximating the physical inner product of loop quantum cosmology

    International Nuclear Information System (INIS)

    Bahr, Benjamin; Thiemann, Thomas

    2007-01-01

    In this paper, we investigate the possibility of approximating the physical inner product of constrained quantum theories. In particular, we calculate the physical inner product of a simple cosmological model in two ways: firstly, we compute it analytically via a trick; secondly, we use the complexifier coherent states to approximate the physical inner product defined by the master constraint of the system. We find that the approximation is able to recover the analytic solution of the problem, which consolidates hopes that coherent states will help to approximate solutions of more complicated theories, like loop quantum gravity

  12. Polynomial approximation of functions in Sobolev spaces

    International Nuclear Information System (INIS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional-order Sobolev spaces is treated as well as the usual integer-order spaces and several nonstandard Sobolev-like spaces
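
    The flavor of such polynomial-approximation bounds is easy to check numerically: least-squares polynomial fits to a smooth function improve rapidly with degree. The target exp(x) and the degrees below are arbitrary choices for this sketch:

```python
import numpy as np

# L2-type polynomial approximation of a smooth function on [-1, 1]:
# for smooth targets, Bramble-Hilbert-style estimates predict errors
# shrinking quickly as the polynomial degree grows.
x = np.linspace(-1.0, 1.0, 400)
f = np.exp(x)

errors = {}
for deg in (1, 3, 5, 7):
    coef = np.polynomial.legendre.legfit(x, f, deg)   # discrete least squares
    approx = np.polynomial.legendre.legval(x, coef)
    errors[deg] = float(np.max(np.abs(approx - f)))
print(errors)   # max error drops by orders of magnitude with degree
```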

  13. Nernst effect beyond the relaxation-time approximation

    OpenAIRE

    Pikulin, D. I.; Hou, Chang-Yu; Beenakker, C. W. J.

    2011-01-01

    Motivated by recent interest in the Nernst effect in cuprate superconductors, we calculate this magneto-thermo-electric effect for an arbitrary (anisotropic) quasiparticle dispersion relation and elastic scattering rate. The exact solution of the linearized Boltzmann equation is compared with the commonly used relaxation-time approximation. We find qualitative deficiencies of this approximation, to the extent that it can get the sign wrong of the Nernst coefficient. Ziman's improvement of the...

  14. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  15. PEEL V HAMON J&C ENGINEERING (PTY) LTD: Ignoring the Result-Requirement of Section 163(1)(a) of the Companies Act and Extending the Oppression Remedy Beyond its Statutorily Intended Reach

    Directory of Open Access Journals (Sweden)

    HGJ Beukes

    2014-11-01

    Full Text Available This case note provides a concise and understandable version of the confusing facts in Peel v Hamon J&C Engineering (Pty) Ltd, and deals with the remedy provided for in section 163 of the Companies Act (the oppression remedy). The importance of drawing a distinction between the application of this section and the orders that the Court can make to provide relief in terms of subsection (2) is explained, after which each requirement contained in subsection (1)(a) is analysed. With reference to the locus standi-requirement, it is indicated that the judgment is not to be regarded as authority for the contention that a shareholder or a director who wants to exercise the oppression remedy need not have been a shareholder or a director of the company at the time of the conduct. With reference to the conduct-requirement, it is indicated that it would have been more appropriate for the applicants to have made use of a remedy in terms of the law of contract. Most importantly, the result-requirement is indicated to have been ignored, as a lack of certainty that there will be a result is argued not to constitute a result. Ignoring the result-requirement is explained to have resulted in ignoring the detriment-requirement, in turn. Accordingly, it is concluded that the oppression remedy was utilised without the specified statutory criteria having been satisfied and that the applicants' interests were protected by a remedy which should not have found application under the circumstances, as this was beyond the remedy's statutorily intended reach.

  16. Approximate spatio-temporal top-k publish/subscribe

    KAUST Repository

    Chen, Lisi

    2018-04-26

    Location-based publish/subscribe plays a significant role in mobile information disseminations. In this light, we propose and study a novel problem of processing location-based top-k subscriptions over spatio-temporal data streams. We define a new type of approximate location-based top-k subscription, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription, that continuously feeds users with relevant spatio-temporal messages by considering textual similarity, spatial proximity, and information freshness. Different from existing location-based top-k subscriptions, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription can automatically adjust the triggering condition by taking the triggering score of other subscriptions into account. The group filtering efficacy can be substantially improved by sacrificing the publishing result quality with a bounded guarantee. We conduct extensive experiments on two real datasets to demonstrate the performance of the developed solutions.

  17. Approximate spatio-temporal top-k publish/subscribe

    KAUST Repository

    Chen, Lisi; Shang, Shuo

    2018-01-01

    Location-based publish/subscribe plays a significant role in mobile information disseminations. In this light, we propose and study a novel problem of processing location-based top-k subscriptions over spatio-temporal data streams. We define a new type of approximate location-based top-k subscription, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription, that continuously feeds users with relevant spatio-temporal messages by considering textual similarity, spatial proximity, and information freshness. Different from existing location-based top-k subscriptions, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription can automatically adjust the triggering condition by taking the triggering score of other subscriptions into account. The group filtering efficacy can be substantially improved by sacrificing the publishing result quality with a bounded guarantee. We conduct extensive experiments on two real datasets to demonstrate the performance of the developed solutions.

  18. Resummation of perturbative QCD by Padé approximants

    International Nuclear Information System (INIS)

    Gardi, E.

    1997-01-01

    In this lecture I present some of the new developments concerning the use of Padé approximants (PA's) for resumming perturbative series in QCD. It is shown that PA's tend to reduce the renormalization scale and scheme dependence as compared to truncated series. In particular it is proven that in the limit where the β function is dominated by the 1-loop contribution, there is an exact symmetry that guarantees invariance of diagonal PA's under changing the renormalization scale. In addition it is shown that in the large-β_0 approximation diagonal PA's can be interpreted as a systematic method for approximating the flow of momentum in Feynman diagrams. This corresponds to a new multiple-scale generalization of the Brodsky-Lepage-Mackenzie (BLM) method to higher orders. I illustrate the method with the Bjorken sum rule and the vacuum polarization function. (author)

  19. The log-linear return approximation, bubbles, and predictability

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expec...

  20. On root mean square approximation by exponential functions

    OpenAIRE

    Sharipov, Ruslan

    2014-01-01

    The problem of root mean square approximation of a square integrable function by finite linear combinations of exponential functions is considered. It is subdivided into linear and nonlinear parts. The linear approximation problem is solved. Then the nonlinear problem is studied in some particular example.
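
    The linear subproblem mentioned in the record is easy to state concretely: once the exponents are fixed, the root-mean-square-optimal coefficients solve an ordinary linear least-squares system. The target function and the exponents below are illustrative assumptions:

```python
import numpy as np

# RMS approximation of f by sum_k c_k * exp(lambda_k * x): with the
# exponents lambda_k held fixed, finding the coefficients c_k is a
# linear least-squares problem (the "linear part" of the problem).
x = np.linspace(0.0, 1.0, 200)
f = 1.0 / (1.0 + x)                     # square-integrable target on [0, 1]

lambdas = np.array([-0.5, -1.0, -2.0])  # fixed; the nonlinear part would tune these
B = np.exp(np.outer(x, lambdas))        # column k is exp(lambda_k * x)
c, *_ = np.linalg.lstsq(B, f, rcond=None)

rms = float(np.sqrt(np.mean((B @ c - f) ** 2)))
print(c, rms)
```

    The genuinely hard, nonlinear part of the problem is optimizing the exponents themselves, which is why the record treats it separately.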

  1. Approximate estimation of system reliability via fault trees

    International Nuclear Information System (INIS)

    Dutuit, Y.; Rauzy, A.

    2005-01-01

    In this article, we show how fault tree analysis, carried out by means of binary decision diagrams (BDD), is able to approximate reliability of systems made of independent repairable components with a good accuracy and a good efficiency. We consider four algorithms: the Murchland lower bound, the Barlow-Proschan lower bound, the Vesely full approximation and the Vesely asymptotic approximation. For each of these algorithms, we consider an implementation based on the classical minimal cut sets/rare events approach and another one relying on the BDD technology. We present numerical results obtained with both approaches on various examples
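
    A minimal sketch of the minimal cut sets/rare-events approach mentioned above, for a 2-out-of-3 system with hypothetical component unavailabilities; the rare-event sum slightly overestimates the exact unavailability obtained by state enumeration:

```python
import itertools

# Rare-event approximation: system unavailability ~ sum over minimal
# cut sets of the product of component unavailabilities.  The exact
# value is found by enumerating all 2^n component states (feasible
# only for small systems, hence the approximations).
q = {1: 0.01, 2: 0.02, 3: 0.015}          # hypothetical unavailabilities
cut_sets = [{1, 2}, {1, 3}, {2, 3}]       # 2-out-of-3 system: any two failures

rare_event = sum(q[i] * q[j] for i, j in (tuple(c) for c in cut_sets))

exact = 0.0
for state in itertools.product([0, 1], repeat=3):      # 1 = component failed
    failed = {i + 1 for i, s in enumerate(state) if s}
    if any(c <= failed for c in cut_sets):             # some cut set fully failed
        prob = 1.0
        for i in q:
            prob *= q[i] if i in failed else (1.0 - q[i])
        exact += prob
print(rare_event, exact)   # upper bound, within ~1% here
```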

  2. Usefulness of bound-state approximations in reaction theory

    International Nuclear Information System (INIS)

    Adhikari, S.K.

    1981-01-01

    A bound-state approximation when applied to certain operators, such as the many-body resolvent operator for a two-body fragmentation channel, in many-body scattering equations, reduces such equations to equivalent two-body scattering equations which are supposed to provide a good description of the underlying physical process. In this paper we test several variants of bound-state approximations in the soluble three-boson Amado model and find that such approximations lead to weak and unacceptable kernels for the equivalent two-body scattering equations and hence to a poor description of the underlying many-body process

  3. Quenched Approximation to ΔS = 1 K Decay

    International Nuclear Information System (INIS)

    Christ, Norman H.

    2005-01-01

    The importance of explicit quark loops in the amplitudes contributing to ΔS = 1, K meson decays raises potential ambiguities when these amplitudes are evaluated in the quenched approximation. Using the factorization of these amplitudes into short- and long-distance parts provided by the standard low-energy effective weak Hamiltonian, we argue that the quenched approximation can be conventionally justified if it is applied to the long-distance portion of each amplitude. The result is a reasonably well-motivated definition of the quenched approximation that is close to that employed in the RBC and CP-PACS calculations of these quantities

  4. Discovering approximate-associated sequence patterns for protein-DNA interactions

    KAUST Repository

    Chan, Tak Ming

    2010-12-30

    Motivation: The bindings between transcription factors (TFs) and transcription factor binding sites (TFBSs) are fundamental protein-DNA interactions in transcriptional regulation. Extensive efforts have been made to better understand the protein-DNA interactions. Recent mining on exact TF-TFBS-associated sequence patterns (rules) has shown great potentials and achieved very promising results. However, exact rules cannot handle variations in real data, resulting in limited informative rules. In this article, we generalize the exact rules to approximate ones for both TFs and TFBSs, which are essential for biological variations. Results: A progressive approach is proposed to address the approximation to alleviate the computational requirements. Firstly, similar TFBSs are grouped from the available TF-TFBS data (TRANSFAC database). Secondly, approximate and highly conserved binding cores are discovered from TF sequences corresponding to each TFBS group. A customized algorithm is developed for the specific objective. We discover the approximate TF-TFBS rules by associating the grouped TFBS consensuses and TF cores. The rules discovered are evaluated by matching (verifying with) the actual protein-DNA binding pairs from Protein Data Bank (PDB) 3D structures. The approximate results exhibit many more verified rules and up to 300% better verification ratios than the exact ones. The customized algorithm achieves over 73% better verification ratios than traditional methods. Approximate rules (64-79%) are shown statistically significant. Detailed variation analysis and conservation verification on NCBI records demonstrate that the approximate rules reveal both the flexible and specific protein-DNA interactions accurately. The approximate TF-TFBS rules discovered show great generalized capability of exploring more informative binding rules. © The Author 2010. Published by Oxford University Press. All rights reserved.

  5. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...... for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy....

  6. Approximate convex hull of affine iterated function system attractors

    International Nuclear Information System (INIS)

    Mishkinis, Anton; Gentil, Christian; Lanquetin, Sandrine; Sokolov, Dmitry

    2012-01-01

    Highlights: ► We present an iterative algorithm to approximate affine IFS attractor convex hull. ► Elimination of the interior points significantly reduces the complexity. ► To optimize calculations, we merge the convex hull images at each iteration. ► Approximation by ellipses increases speed of convergence to the exact convex hull. ► We present a method of the output convex hull simplification. - Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output approximate convex hull. The number of iterations and the execution time increases logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximate convex hull without loss of accuracy.
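
    A brute-force baseline against which hull-iteration algorithms like the one above can be compared: sample the attractor with the chaos game and take the convex hull of the samples. The IFS used here is the Sierpinski triangle, whose attractor's exact hull is the triangle of area 1/2; the sample counts are illustrative choices:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Sierpinski IFS: three contractions by 1/2 toward the triangle vertices
vertices = (np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
maps = [lambda p, v=v: 0.5 * p + 0.5 * v for v in vertices]

rng = np.random.default_rng(1)
p = np.array([0.2, 0.2])
pts = []
for k in range(20000):
    p = maps[rng.integers(3)](p)   # chaos game: apply a random map
    if k > 100:                    # discard the transient toward the attractor
        pts.append(p.copy())

hull = ConvexHull(np.array(pts))
print(hull.volume)                 # in 2-D, .volume is the enclosed area
```

    The sampled hull approaches the exact triangle from inside; the article's algorithm avoids generating point clouds altogether by iterating on hulls directly.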

  7. The high intensity approximation applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1980-08-01

    It is shown that the most commonly used high-intensity approximations as applied to ionization by strong electromagnetic fields are related. The applicability of the steepest descent method in these approximations, and the relation between them and first-order perturbation theory, are also discussed. (Author) [pt]

  8. The Log-Linear Return Approximation, Bubbles, and Predictability

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    2012-01-01

    We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expe...
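
    The approximation itself is short enough to verify numerically: with ρ and κ computed from the mean log dividend-price ratio, the log-linearized return tracks the exact log return closely when that ratio is stationary. The simulated series below are hypothetical, chosen only so that d - p stays near its mean:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Hypothetical log prices p and log dividends d with a stationary
# log dividend-price ratio (illustrative numbers only)
p = np.cumsum(0.01 + 0.04 * rng.standard_normal(T)) + 5.0
d = p - 3.0 + 0.1 * rng.standard_normal(T)

# Exact log return: r_{t+1} = log(P_{t+1} + D_{t+1}) - log(P_t)
exact = np.log(np.exp(p[1:]) + np.exp(d[1:])) - p[:-1]

# Campbell-Shiller linearization around the mean log dividend-price ratio
delta_bar = np.mean(d - p)
rho = 1.0 / (1.0 + np.exp(delta_bar))
kappa = -np.log(rho) - (1.0 - rho) * np.log(1.0 / rho - 1.0)
approx = kappa + rho * p[1:] + (1.0 - rho) * d[1:] - p[:-1]

print(np.max(np.abs(approx - exact)))   # second-order error, small here
```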

  9. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  10. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  11. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, along with a modernization based on expanding the main structures of the core algorithm in a parameter. The idea of the method rests on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that problem is as follows: from the end point of the phase trajectory, a path is sought that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function equals zero; otherwise the value of the Bellman function is greater than zero. For this special task the Bellman equation is considered. The support approximation and the Bellman equation are selected. The Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, yielding an expanded original chain. A nonzero initial condition is selected for the new variable, so the trajectory thus obtained lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation. As a result of these procedures an algorithm of successive improvements is designed. Relaxation conditions for the algorithm and the corresponding necessary conditions of optimality are also obtained.

  12. Performance approximation of pick-to-belt orderpicking systems

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1994-01-01

    In this paper, an approximation method is discussed for the analysis of pick-to-belt orderpicking systems. The aim of the approximation method is to provide an instrument for obtaining rapid insight into the performance of designs of pick-to-belt orderpicking systems. It can be used to

  13. Modified semiclassical approximation for trapped Bose gases

    International Nuclear Information System (INIS)

    Yukalov, V.I.

    2005-01-01

    A generalization of the semiclassical approximation is suggested allowing for an essential extension of its region of applicability. In particular, it becomes possible to describe Bose-Einstein condensation of a trapped gas in low-dimensional traps and in traps of low confining dimensions, for which the standard semiclassical approximation is not applicable. The result of the modified approach is shown to coincide with purely quantum-mechanical calculations for harmonic traps, including the one-dimensional harmonic trap. The advantage of the semiclassical approximation is in its simplicity and generality. Power-law potentials of arbitrary powers are considered. The effective thermodynamic limit is defined for any confining dimension. The behavior of the specific heat, isothermal compressibility, and density fluctuations is analyzed, with an emphasis on low confining dimensions, where the usual semiclassical method fails. The peculiarities of the thermodynamic characteristics in the effective thermodynamic limit are discussed

  14. On Born approximation in black hole scattering

    Science.gov (United States)

    Batic, D.; Kelkar, N. G.; Nowakowski, M.

    2011-12-01

    A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.

  15. Efficient solution of parabolic equations by Krylov approximation methods

    Science.gov (United States)

    Gallopoulos, E.; Saad, Y.

    1990-01-01

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector, which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
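    The projection step described in the abstract can be sketched as follows. This is a generic Arnoldi-based illustration under assumed names (`krylov_expm_v`, `_expm_small`); the sketch uses a truncated-Taylor exponential for the small projected matrix rather than the rational approximations the abstract mentions:

    ```python
    import numpy as np

    def _expm_small(H, terms=20):
        # Matrix exponential via scaling-and-squaring with a truncated Taylor
        # series; adequate for the small projected matrix (a sketch, not a
        # production expm).
        nrm = np.linalg.norm(H, 1)
        s = int(np.ceil(np.log2(nrm))) if nrm > 1.0 else 0
        M = H / (2.0 ** s)
        E = np.eye(H.shape[0])
        term = np.eye(H.shape[0])
        for k in range(1, terms + 1):
            term = term @ M / k
            E = E + term
        for _ in range(s):
            E = E @ E
        return E

    def krylov_expm_v(A, v, m=25):
        # Approximate exp(A) @ v by projecting onto the Krylov subspace K_m(A, v).
        n = v.shape[0]
        beta = np.linalg.norm(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]                  # the only operation on the large matrix
            for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:          # happy breakdown: subspace is invariant
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k)
        e1[0] = 1.0
        # exponential of the small (k x k) Hessenberg matrix only
        return beta * V[:, :k] @ (_expm_small(H[:k, :k]) @ e1)
    ```

    Only matrix-by-vector products touch the large matrix, which is what makes the scheme easy to parallelize and vectorize.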

  16. Diversity comparison of Pareto front approximations in many-objective optimization.

    Science.gov (United States)

    Li, Miqing; Yang, Shengxiang; Liu, Xiaohui

    2014-12-01

    Diversity assessment of Pareto front approximations is an important issue in the stochastic multiobjective optimization community. Most of the diversity indicators in the literature were designed to work, in principle, for any number of objectives, but in practice many of them become infeasible or unworkable when the number of objectives is large. In this paper, we propose a diversity comparison indicator (DCI) to assess the diversity of Pareto front approximations in many-objective optimization. DCI evaluates the relative quality of different Pareto front approximations rather than providing an absolute measure of distribution for a single approximation. In DCI, all the approximations concerned are placed in a grid environment so that there are some hyperboxes containing one or more solutions. The proposed indicator only considers the contribution of different approximations to nonempty hyperboxes. Therefore, the computational cost does not increase exponentially with the number of objectives. In fact, the implementation of DCI is of quadratic time complexity, fully independent of the number of divisions used in the grid. Systematic experiments are conducted using three groups of artificial Pareto front approximations and seven groups of real Pareto front approximations with different numbers of objectives to verify the effectiveness of DCI. Moreover, a comparison with two diversity indicators widely used in many-objective optimization is made analytically and empirically. Finally, a parametric investigation reveals interesting insights into the choice of the number of grid divisions and offers some suggested settings for users with different preferences.
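    The grid idea can be illustrated with a much-simplified relative coverage score. This is not the published DCI (which weights contributions by grid distance); it is a hedged sketch, with all names assumed, showing why storing only nonempty hyperboxes keeps the cost from growing exponentially with the number of objectives:

    ```python
    import numpy as np

    def grid_coverage(approximations, div=5):
        # Score each Pareto front approximation by the fraction of occupied grid
        # hyperboxes (over all sets jointly) that it covers itself.
        all_pts = np.vstack(approximations)
        lo, hi = all_pts.min(axis=0), all_pts.max(axis=0)
        width = np.where(hi > lo, hi - lo, 1.0)

        def boxes(pts):
            idx = np.floor((pts - lo) / width * div).astype(int)
            idx = np.clip(idx, 0, div - 1)   # points on the upper boundary
            return {tuple(row) for row in idx}

        covered = [boxes(a) for a in approximations]
        universe = set().union(*covered)     # only nonempty hyperboxes are stored
        return [len(c) / len(universe) for c in covered]
    ```

    The cost is linear in the number of solutions, since only occupied boxes enter the sets, never the full div^d grid.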

  17. Approximating Preemptive Stochastic Scheduling

    OpenAIRE

    Megow Nicole; Vredeveld Tjark

    2009-01-01

    We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...

  18. Precise analytic approximations for the Bessel function J1 (x)

    Science.gov (United States)

    Maass, Fernando; Martin, Pablo

    2018-03-01

    Precise and straightforward analytic approximations for the Bessel function J1 (x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between both expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called multipoint quasirational approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with errors of less than 0.04 percent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
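    The two classical expansions that such a bridging approximant connects can be written down directly. The fitted coefficients of the paper's approximation are not reproduced in the abstract, so the sketch below only shows the two regimes being bridged (the ascending power series near x = 0 and the leading asymptotic term for large x):

    ```python
    import math

    def j1_series(x, terms=12):
        # ascending power series: J1(x) = sum_k (-1)^k (x/2)^(2k+1) / (k! (k+1)!)
        return sum((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
                   * (x / 2.0) ** (2 * k + 1) for k in range(terms))

    def j1_asymptotic(x):
        # leading large-x term: J1(x) ~ sqrt(2/(pi x)) * cos(x - 3*pi/4)
        return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - 3.0 * math.pi / 4.0)
    ```

    A single approximant accurate in both regimes, as the abstract describes, must match the series behavior at small x and the x^(-1/2) trigonometric decay at large x.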

  19. Continua with microstructure

    CERN Document Server

    Capriz, Gianfranco

    1989-01-01

    This book proposes a new general setting for theories of bodies with microstructure when they are described within the scheme of the continuum: besides the usual fields of classical thermomechanics (displacement, stress, temperature, etc.) some new fields enter the picture (order parameters, microstress, etc.). The book can be used in a semester course for students who have already followed lectures on the classical theory of continua and is intended as an introduction to special topics: materials with voids, liquid crystals, meromorphic continua. In fact, the content is essentially that of a series of lectures given in 1986 at the Scuola Estiva di Fisica Matematica in Ravello (Italy). I would like to thank the Scientific Committee of the Gruppo di Fisica Matematica of the Italian National Council of Research (CNR) for the invitation to teach in the School. I also thank the Committee for Mathematics of CNR and the National Science Foundation: they have supported my research over many years and given ...

  20. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  1. Some properties of dual and approximate dual of fusion frames

    OpenAIRE

    Arefijamaal, Ali Akbar; Neyshaburi, Fahimeh Arabyani

    2016-01-01

    In this paper we extend the notion of approximate dual to fusion frames and present some approaches to obtain dual and approximate alternate dual fusion frames. Also, we study the stability of dual and approximate alternate dual fusion frames.

  2. Geometrical-optics approximation of forward scattering by coated particles.

    Science.gov (United States)

    Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang

    2004-03-20

    By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of the scattering intensity distribution within a forward angular range (0 to 60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are exactly calculated to improve the approximation precision. This method proves effective for transparent and slightly absorbing particles with size parameters larger than 75, but fails to give good approximation results at scattering angles at which refracted rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical optics approximation is effective only at small forward angles, typically less than about 10 degrees.

  3. Inertial parameters in the interacting boson fermion approximation

    International Nuclear Information System (INIS)

    Dukelsky, J.; Lima, C.

    1986-06-01

    The Hartree-Bose-Fermi and the adiabatic approximations are used to derive analytic formulas for the moment of inertia and the decoupling parameter of the interacting boson fermion approximation for deformed systems. These formulas are applied to the SU(3) dynamical symmetry, obtaining perfect agreement with the exact results. (Authors)

  4. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting; Yang, Jingping; Huang, Jianhua Z.

    2011-01-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
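    The BF building block itself is a one-line convex mixture, which is what makes it easy to interpret and to patch locally. A minimal sketch (the function name is an assumption; the paper's contribution, the local patching scheme and its error bounds, is not reproduced here):

    ```python
    def bf_copula(u, v, a, b, c):
        # Bivariate Frechet copula: convex mixture of the comonotone copula min(u, v),
        # the independence copula u*v, and the countermonotone copula max(u+v-1, 0).
        assert abs(a + b + c - 1.0) < 1e-12 and min(a, b, c) >= 0.0
        return a * min(u, v) + b * u * v + c * max(u + v - 1.0, 0.0)
    ```

    Any such mixture automatically respects the Fréchet-Hoeffding bounds and has uniform margins; the patching scheme in the paper applies mixtures like this on local pieces of the unit square.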

  6. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  7. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  8. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
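    The comparison the article makes can be reproduced numerically in a few lines. This sketch (not the article's own examples) interpolates exp on [0, 1] at five equally spaced nodes and compares the worst-case error with the degree-4 Taylor polynomial about 0:

    ```python
    import math
    import numpy as np

    # Degree-4 interpolating polynomial for exp(x) at 5 equally spaced nodes on [0, 1]
    nodes = np.linspace(0.0, 1.0, 5)
    coeffs = np.polyfit(nodes, np.exp(nodes), 4)

    x = np.linspace(0.0, 1.0, 201)
    interp_err = np.max(np.abs(np.polyval(coeffs, x) - np.exp(x)))

    # Degree-4 Taylor polynomial about x = 0, for comparison
    taylor = sum(x ** k / math.factorial(k) for k in range(5))
    taylor_err = np.max(np.abs(taylor - np.exp(x)))
    ```

    The interpolant spreads its error across the whole interval, while the Taylor polynomial concentrates its accuracy near the expansion point, so the interpolant's maximum error over [0, 1] is markedly smaller at the same degree.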

  9. PWL approximation of nonlinear dynamical systems, part I: structural stability

    International Nuclear Information System (INIS)

    Storace, M; De Feo, O

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes the approximation method and applies it to some particularly significant dynamical systems (topological normal forms). The structural stability of the PWL approximations of such systems is investigated through a bifurcation analysis (via continuation methods)
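    The core idea of a PWL approximation can be illustrated in one dimension (the paper works with multi-dimensional PWL functions suited to circuit implementation; this is only a hedged sketch with assumed names):

    ```python
    import numpy as np

    def pwl_approx(f, a, b, n):
        # Piecewise-linear interpolant of f on [a, b] with n equally spaced
        # breakpoints; between breakpoints the approximation is linear.
        xs = np.linspace(a, b, n)
        ys = f(xs)
        return lambda x: np.interp(x, xs, ys)
    ```

    For a twice-differentiable f, the error on each segment of width h is bounded by h^2/8 times the maximum of |f''|, so refining the breakpoint grid gives quadratic convergence.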

  10. Empirical Rationality in the Stock Market

    DEFF Research Database (Denmark)

    Raahauge, Peter

    2003-01-01

    The equilibrium asset pricing function is seriously affected by the existence of approximation errors and the descriptive properties and normative implications of the model are significantly improved. This suggests that investors do not (and should not) ignore approximation errors. Keywords: Approximation errors...

  11. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

    Science.gov (United States)

    Ringenburg, Michael F.

    Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in

  12. An improved corrective smoothed particle method approximation for second‐order derivatives

    NARCIS (Netherlands)

    Korzilius, S.P.; Schilders, W.H.A.; Anthonissen, M.J.H.

    2013-01-01

    To solve (partial) differential equations it is necessary to have good numerical approximations. In SPH, most approximations suffer from the presence of boundaries. In this work a new approximation for the second-order derivative is derived and numerically compared with two other approximations.

  13. Blind sensor calibration using approximate message passing

    International Nuclear Information System (INIS)

    Schülke, Christophe; Caltagirone, Francesco; Zdeborová, Lenka

    2015-01-01

    The ubiquity of approximately sparse data has led a variety of communities to take great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them to real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, sensors are decalibrated and each one introduces a different multiplicative gain to the measurements. Cal-AMP shares the scalability of approximate message passing, allowing us to treat large sized instances of these problems, and experimentally exhibits a phase transition between domains of success and failure. (paper)

  14. The binary collision approximation: Background and introduction

    International Nuclear Information System (INIS)

    Robinson, M.T.

    1992-08-01

    The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented

  15. An inductive algorithm for smooth approximation of functions

    International Nuclear Information System (INIS)

    Kupenova, T.N.

    2011-01-01

    An inductive algorithm is presented for smooth approximation of functions, based on the Tikhonov regularization method and applied to a specific kind of the Tikhonov parametric functional. The discrepancy principle is used for estimation of the regularization parameter. The principle of heuristic self-organization is applied for assessment of some parameters of the approximating function

  16. Gauge-invariant intense-field approximations to all orders

    International Nuclear Information System (INIS)

    Faisal, F H M

    2007-01-01

    We present a gauge-invariant formulation of the so-called strong-field KFR approximations in the 'velocity' and 'length' gauges and demonstrate their equivalence in all orders. The theory thus overcomes a longstanding discrepancy between the strong-field velocity and the length-gauge approximations for non-perturbative processes in intense laser fields. (fast track communication)

  17. On the convergence of multigroup discrete-ordinates approximations

    International Nuclear Information System (INIS)

    Victory, H.D. Jr.; Allen, E.J.; Ganguly, K.

    1987-01-01

    Our analysis is divided into two distinct parts which we label for convenience as Part A and Part B. In Part A, we demonstrate that the multigroup discrete-ordinates approximations are well-defined and converge to the exact transport solution in any subcritical setting. For the most part, we focus on transport in two-dimensional Cartesian geometry. A Nystroem technique is used to extend the discrete ordinates multigroup approximates to all values of the angular and energy variables. Such an extension enables us to employ collectively compact operator theory to deduce stability and convergence of the approximates. In Part B, we perform a thorough convergence analysis for the multigroup discrete-ordinates method for an anisotropically-scattering subcritical medium in slab geometry. The diamond-difference and step-characteristic spatial approximation methods are each studied. The multigroup neutron fluxes are shown to converge in a Banach space setting under realistic smoothness conditions on the solution. This is the first thorough convergence analysis for the fully-discretized multigroup neutron transport equations

  18. Approximation theorems by Meyer-Koenig and Zeller type operators

    International Nuclear Information System (INIS)

    Ali Ozarslan, M.; Duman, Oktay

    2009-01-01

    This paper is mainly connected with the approximation properties of Meyer-Koenig and Zeller (MKZ) type operators. We first introduce a general sequence of MKZ operators based on q-integers and then obtain a Korovkin-type approximation theorem for these operators. We also compute their rates of convergence by means of modulus of continuity and the elements of Lipschitz class functionals. Furthermore, we give an rth order generalization of our operators in order to get some explicit approximation results.

  19. Space-angle approximations in the variational nodal method

    International Nuclear Information System (INIS)

    Lewis, E. E.; Palmiotti, G.; Taiwo, T.

    1999-01-01

    The variational nodal method is formulated such that the angular and spatial approximations may be examined separately. Spherical harmonic, simplified spherical harmonic, and discrete ordinate approximations are coupled to the primal hybrid finite element treatment of the spatial variables. Within this framework, two classes of spatial trial functions are presented: (1) orthogonal polynomials for the treatment of homogeneous nodes and (2) bilinear finite subelement trial functions for the treatment of fuel assembly sized nodes in which fuel-pin cell cross sections are represented explicitly. Polynomial and subelement trial functions are applied to benchmark water-reactor problems containing MOX fuel using spherical harmonic and simplified spherical harmonic approximations. The resulting accuracy and computing costs are compared

  20. Subquadratic medial-axis approximation in $\mathbb{R}^3$

    Directory of Open Access Journals (Sweden)

    Christian Scheffer

    2015-09-01

    We present an algorithm that approximates the medial axis of a smooth manifold in $\mathbb{R}^3$ which is given by a sufficiently dense point sample. The resulting, non-discrete approximation is shown to converge to the medial axis as the sampling density approaches infinity. While all previous algorithms guaranteeing convergence have a running time quadratic in the size $n$ of the point sample, we achieve a running time of at most $\mathcal{O}(n\log^3 n)$. While there is no subquadratic upper bound on the output complexity of previous algorithms for non-discrete medial axis approximation, the output of our algorithm is guaranteed to be of linear size.

  1. Merging Belief Propagation and the Mean Field Approximation

    DEFF Research Database (Denmark)

    Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro

    2010-01-01

    We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al., which allows the use of the same objective function (Kullback-Leibler divergence) as a starting point. In this method, message passing fixed point equations (which correspond to the update rules in a message passing algorithm) are then obtained by imposing different region-based approximations and constraints on the mean field and belief propagation parts of the corresponding factor graph. Our results can be applied, for example, to algorithms that perform joint channel estimation and decoding in iterative receivers. This is demonstrated in a simple example.

  2. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  3. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multiexponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0, 1), as opposed to the Erlang distribution, which has only discrete values of the coefficient of variation.
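    The two-moment matching can be sketched for the simplest case, a two-phase hypoexponential (the sum of two independent exponentials). Names are assumptions; note that two phases only reach coefficients of variation with cv^2 >= 1/2, and the paper's multiexponential construction is what extends coverage to the whole range (0, 1):

    ```python
    import math

    def hypoexp_rates(mean, cv):
        # Two-phase hypoexponential matched to the first two moments.
        # With phase means x and y: x + y = mean and x**2 + y**2 = variance,
        # so x*y = (mean**2 - variance) / 2 and x, y are roots of a quadratic.
        # Feasible for 0.5 <= cv**2 < 1.
        var = (cv * mean) ** 2
        prod = (mean ** 2 - var) / 2.0          # x * y
        disc = mean ** 2 - 4.0 * prod           # discriminant of t^2 - mean*t + prod
        x = (mean + math.sqrt(disc)) / 2.0
        y = mean - x
        return 1.0 / x, 1.0 / y                 # rates of the two phases

    def hypoexp_sample(l1, l2, rng):
        # draw one variate: sum of two independent exponentials
        return rng.expovariate(l1) + rng.expovariate(l2)
    ```

    Sampling is then just the sum of two exponential variates with the fitted rates.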

  4. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is ${\mathcal O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to ${\mathcal O}(n^{3})$. This paper presents an approximate fGn model of ${\mathcal O}(n)$ computational cost, with either direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
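    The matching step can be sketched with a simple least-squares fit of mixture weights for four hand-picked AR(1) coefficients (the paper fits the approximation parameters differently; names and the choice of coefficients here are assumptions):

    ```python
    import numpy as np

    def fgn_acf(H, lags):
        # exact fGn autocorrelation: rho(k) = (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}) / 2
        k = np.asarray(lags, dtype=float)
        return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                      + np.abs(k - 1) ** (2 * H))

    def fit_ar1_mixture(H, phis, max_lag=100):
        # Least-squares weights w_i so that sum_i w_i * phi_i^k matches rho(k)
        # on lags 0..max_lag; each phi_i is the lag-one correlation of one AR(1).
        lags = np.arange(max_lag + 1)
        target = fgn_acf(H, lags)
        X = np.stack([p ** lags for p in phis], axis=1)   # column i: phi_i^k
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
        return w, np.max(np.abs(X @ w - target))
    ```

    Mixing geometrically decaying AR(1) correlations at several time scales mimics the slow power-law decay of the fGn autocorrelation, which is why a handful of components already give a close fit.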

  5. Sharp Bounds for Symmetric and Asymmetric Diophantine Approximation

    Institute of Scientific and Technical Information of China (English)

    Cornelis KRAAIKAMP; Ionica SMEETS

    2011-01-01

    In 2004, Tong found bounds for the approximation quality of a regular continued fraction convergent to a rational number, expressed in bounds for both the previous and next approximation. The authors sharpen his results with a geometric method and give both sharp upper and lower bounds. The asymptotic frequencies with which these bounds occur are also calculated.

  6. Evaluation of approximate design procedures for biaxially loaded

    African Journals Online (AJOL)

    The approximation according to the ACI is based on the work by Parme [9], who chose to approximate α as a logarithmic function of a parameter β representing an actual point on the non-dimensional load contour where the two moment components, related to the respective uniaxial capacities, are equal, i.e. β = my/muy ...

  7. Effective medium super-cell approximation for interacting disordered systems: an alternative real-space derivation of generalized dynamical cluster approximation

    International Nuclear Information System (INIS)

    Moradian, Rostam

    2006-01-01

    We develop a generalized real-space effective medium super-cell approximation (EMSCA) method to treat the electronic states of interacting disordered systems. This method is general and allows randomness both in the on-site energies and in the hopping integrals. For a non-interacting disordered system, in the special case of randomness in the on-site energies, this method is equivalent to the non-local coherent potential approximation (NLCPA) derived previously. Also, for an interacting system the EMSCA method leads to the real-space derivation of the generalized dynamical cluster approximation (DCA) for a general lattice structure. We found that the original DCA and the NLCPA are two simple cases of this technique, so the EMSCA is equivalent to the generalized DCA with interaction and randomness included in both the on-site energies and the hopping integrals. All of the equations of this formalism are derived by using the effective medium theory in real space

  8. Perceptions of a fluid consensus: uniqueness bias, false consensus, false polarization, and pluralistic ignorance in a water conservation crisis.

    Science.gov (United States)

    Monin, Benoît; Norton, Michael I

    2003-05-01

    A 5-day field study (N = 415) during and right after a shower ban demonstrated multifaceted social projection and the tendency to draw personality inferences from simple behavior in a time of drastic consensus change. Bathers thought showering was more prevalent than did non-bathers (false consensus) and respondents consistently underestimated the prevalence of the desirable and common behavior--be it not showering during the shower ban or showering after the ban (uniqueness bias). Participants thought that bathers and non-bathers during the ban differed greatly in their general concern for the community, but self-reports demonstrated that this gap was illusory (false polarization). Finally, bathers thought other bathers cared less than they did, whereas non-bathers thought other non-bathers cared more than they did (pluralistic ignorance). The study captures the many biases at work in social perception in a time of social change.

  9. Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?

    Science.gov (United States)

    Dong, Nianbo; Lipsey, Mark W

    2017-01-01

    It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures, with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research design and data analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, although those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.

  10. Faster and Simpler Approximation of Stable Matchings

    Directory of Open Access Journals (Sweden)

    Katarzyna Paluch

    2014-04-01

    Full Text Available We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previous best known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2) m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give the extension of the algorithm for computing stable many-to-many matchings.
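Paluch's 3/2-approximation itself is not spelled out in the abstract, but it builds on standard deferred-acceptance machinery. As a hedged baseline, here is classic Gale-Shapley for complete, strictly ordered preference lists, the exactly solvable special case (the 3/2 ratio only matters once ties and incomplete lists make finding a maximum stable matching hard):

```python
# Baseline sketch: Gale-Shapley deferred acceptance for complete, strictly
# ordered preference lists. (Paluch's 3/2-approximation targets the harder
# variant with ties and incomplete lists; it is not reproduced here.)
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """men_prefs/women_prefs: dicts name -> ordered list of partners.
    Returns a stable matching as a dict woman -> man."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # next index to propose to
    engaged_to = {}                             # woman -> man
    free_men = deque(men_prefs)
    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:   # w prefers m
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free_men.append(m)                      # w rejects m
    return engaged_to

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # {'x': 'b', 'y': 'a'}
```

Each man proposes down his list; each woman holds her best proposal so far, so no blocking pair can survive, which is exactly the stability condition.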

  11. On the dipole approximation with error estimates

    Science.gov (United States)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.

  12. Passionate ignorance

    DEFF Research Database (Denmark)

    Hyldgaard, Kirsten

    2006-01-01

    Psychoanalysis has nothing to say about education. Psychoanalysis has something to say about pedagogy; psychoanalysis has pedagogical-philosophical implications. Pedagogy, in distinction to education, addresses the question of the subject. This implies that pedagogical theory is and cannot be a s...

  13. Fatal ignorance.

    Science.gov (United States)

    1996-01-01

    The Rajiv Gandhi Foundation (RGF), together with the AIMS-affiliated NGO AIDS Cell, Delhi, held a workshop as part of an effort to raise a 90-doctor RGF AIDS workforce which will work together with nongovernmental organizations on AIDS prevention, control, and management. 25 general practitioners registered with the Indian Medical Council, who have practiced medicine in Delhi for the past 10-20 years, responded to a pre-program questionnaire on HIV-related knowledge and attitudes. 6 out of the 25 physicians did not know what the acronym AIDS stands for, extremely low awareness of the clinical aspects of the disease was revealed, 9 believed in the conspiracy theory of HIV development and accidental release by the US Central Intelligence Agency, 8 believed that AIDS is a problem of only the promiscuous, 18 did not know that the mode of HIV transmission is similar to that of the hepatitis B virus, 12 were unaware that HIV-infected people will test HIV-seronegative during the first three months after initial infection and that they will develop symptoms of full-blown AIDS only after 10 years, 10 did not know the name of even one drug used to treat the disease, 3 believed aspirin to be an effective drug against AIDS, many believed fantastic theories about the modes of HIV transmission, and many were acutely homophobic. Efforts were made to clear misconceptions about HIV during the workshop. It is hoped that participating doctors' attitudes about AIDS and the high-risk groups affected by it were also improved.

  14. On approximation of Lie groups by discrete subgroups

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... The notion of approximation of Lie groups by discrete subgroups was introduced by Tôyama in Kodai Math. Sem. Rep. 1 (1949) 36–37 and investigated in detail by Kuranishi in Nagoya Math. J. 2 (1951) 63–71. It is known as a theorem of Tôyama that any connected Lie group approximated by discrete ...

  15. A simple approximation method for dilute Ising systems

    International Nuclear Information System (INIS)

    Saber, M.

    1996-10-01

    We describe a simple approximate method to analyze dilute Ising systems. The method takes into consideration the fluctuations of the effective field, and is based on a probability distribution of random variables which correctly accounts for all the single site kinematic relations. It is shown that the simplest approximation gives satisfactory results when compared with other methods. (author). 12 refs, 2 tabs

  16. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.

  17. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
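The paper's closed-form approximation is not reproduced in the abstract, but the exact distribution it approximates is easy to evaluate for an integer dispersion parameter. The sketch below computes negative binomial quantiles by direct summation of the pmf, using only the standard library; the parameter values are illustrative, not taken from the paper.

```python
# Exact reference computation (not the paper's closed-form approximation):
# quantiles of a negative binomial fiber-count distribution with integer
# dispersion r, obtained by summing the pmf directly.
from math import comb

def nbinom_pmf(k, r, p):
    """P(K = k): probability of k 'failures' before the r-th success."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def nbinom_quantile(q, r, p):
    """Smallest k with CDF(k) >= q."""
    cdf, k = 0.0, 0
    while True:
        cdf += nbinom_pmf(k, r, p)
        if cdf >= q:
            return k
        k += 1

# 95% two-sided limits around a mean count of r*(1-p)/p = 10 fibers:
r, p = 5, 1 / 3
lo, hi = nbinom_quantile(0.025, r, p), nbinom_quantile(0.975, r, p)
print(lo, hi)
```

Because the variance r(1-p)/p² = 30 exceeds the mean of 10, the interval is noticeably wider than the Poisson interval with the same mean, which is the overdispersion the paper's model is designed to capture.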

  18. Nonresonant approximations to the optical potential

    International Nuclear Information System (INIS)

    Kowalski, K.L.

    1982-01-01

    A new class of approximations to the optical potential, which includes those of the multiple-scattering variety, is investigated. These approximations are constructed so that the optical potential maintains the correct unitarity properties along with a proper treatment of nucleon identity. The special case of nucleon-nucleus scattering with complete inclusion of Pauli effects is studied in detail. The treatment is such that the optical potential receives contributions only from subsystems embedded in their own physically correct antisymmetrized subspaces. It is found that a systematic development of even the lowest-order approximations requires the use of the off-shell extension due to Alt, Grassberger, and Sandhas along with a consistent set of dynamical equations for the optical potential. In nucleon-nucleus scattering a lowest-order optical potential is obtained as part of a systematic, exact, inclusive connectivity expansion which is expected to be useful at moderately high energies. This lowest-order potential consists of an energy-shifted (tρ)-type term with three-body kinematics plus a heavy-particle exchange or pickup term. The natural appearance of the exchange term additivity in the optical potential clarifies the role of the elastic distortion in connection with the treatment of these processes. The relationship of the relevant aspects of the present analysis of the optical potential to conventional multiple scattering methods is discussed

  19. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distribution of the points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
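The SA method's defining feature, one data point per update with no matrix operations, can be illustrated with a Kaczmarz-style sweep. This is a hedged stand-in for the idea, not Wu and Xiu's actual algorithm; the basis, points, and target function are all invented for the example.

```python
# Hedged stand-in for the one-point-at-a-time idea (not the paper's SA
# method): a Kaczmarz-style sweep that updates basis coefficients using a
# single data point per step and never forms or inverts a matrix.
def sequential_fit(points, values, basis, sweeps):
    """Fit f(x) ~ sum_j coef[j] * basis[j](x), one point per update."""
    coef = [0.0] * len(basis)
    for _ in range(sweeps):
        for x, y in zip(points, values):
            phi = [b(x) for b in basis]
            resid = y - sum(c * p for c, p in zip(coef, phi))
            norm2 = sum(p * p for p in phi)
            # project the current coefficients onto the hyperplane
            # defined by this single data point
            coef = [c + resid * p / norm2 for c, p in zip(coef, phi)]
    return coef

# Recover f(x) = 1 + 2x from exact samples:
basis = [lambda x: 1.0, lambda x: x]
pts = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [1.0 + 2.0 * x for x in pts]
c0, c1 = sequential_fit(pts, vals, basis, sweeps=300)
print(round(c0, 6), round(c1, 6))  # close to 1.0 and 2.0
```

For consistent data the cyclic projections converge to the exact coefficients; the point-ordering question this sketch ignores is precisely what the NNR algorithm addresses for irregular point sets.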

  20. Semiclassical approximation to time-dependent Hartree--Fock theory

    International Nuclear Information System (INIS)

    Dworzecka, M.; Poggioli, R.

    1976-01-01

    Working within a time-dependent Hartree-Fock framework, one develops a semiclassical approximation appropriate for large systems. It is demonstrated that the standard semiclassical approach, the Thomas-Fermi approximation, is inconsistent with Hartree-Fock theory when the basic two-body interaction is short-ranged (as in nuclear systems, for example). However, by introducing a simple extension of the Thomas-Fermi approximation, one overcomes this problem. One also discusses the infinite nuclear matter problem and points out that time-dependent Hartree-Fock theory yields collective modes of the zero-sound variety instead of ordinary hydrodynamic (first) sound. One thus emphasizes that one should be extremely circumspect when attempting to cast the equations of motion of time-dependent Hartree-Fock theory into a hydrodynamic-like form

  1. Good and Bad Neighborhood Approximations for Outlier Detection Ensembles

    DEFF Research Database (Denmark)

    Kirner, Evelyn; Schubert, Erich; Zimek, Arthur

    2017-01-01

    Outlier detection methods have used approximate neighborhoods in filter-refinement approaches. Outlier detection ensembles have used artificially obfuscated neighborhoods to achieve diverse ensemble members. Here we argue that outlier detection models could be based on approximate neighborhoods in the first place, thus gaining in both efficiency and effectiveness. It depends, however, on the type of approximation, as only some seem beneficial for the task of outlier detection, while no (large) benefit can be seen for others. In particular, we argue that space-filling curves are beneficial...

  2. APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION

    Directory of Open Access Journals (Sweden)

    Mădălina Roxana Buneci

    2016-12-01

    Full Text Available The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.

  3. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  4. Gaussian and 1/N approximations in semiclassical cosmology

    International Nuclear Information System (INIS)

    Mazzitelli, F.D.; Paz, J.P.

    1989-01-01

    We study the λφ⁴ theory and the interacting O(N) model in a curved background using the Gaussian approximation for the former and the large-N approximation for the latter. We obtain the renormalized version of the semiclassical Einstein equations having in mind a future application of these models to investigate the physics of the very early Universe. We show that, while the Gaussian approximation has two different phases, in the large-N limit only one is present. The different features of the two phases are analyzed at the level of the effective field equations. We discuss the initial-value problem and find the initial conditions that make the theory renormalizable. As an example, we study the de Sitter self-consistent solutions of the semiclassical Einstein equations. Finally, for an identically zero mean value of the field we find the evolution equations for the classical field Ω(x) = ⟨φ²⟩^(1/2) and the spacetime metric. They are very similar to the ones obtained by replacing the classical potential by the one-loop effective potential in the classical equations but do not have the drawbacks of the one-loop approximation

  5. Approximate Matching of Hierarchial Data

    DEFF Research Database (Denmark)

    Augsten, Nikolaus

    The pq-grams of a tree are all its subtrees of a particular shape. Intuitively, two trees are similar if they have many pq-grams in common. The pq-gram distance is an efficient and effective approximation of the tree edit distance. We analyze the properties of the pq-gram distance and compare it with the tree edit...

  6. Pythagorean Approximations and Continued Fractions

    Science.gov (United States)

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
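The claimed coincidence is easy to check numerically: the convergents p/q of the continued fraction [1; 2, 2, 2, ...] for √2 satisfy the Pell relation p² − 2q² = ±1, the property that characterizes the classical side-and-diagonal ("Pythagorean") numbers. A short verification:

```python
# Convergents p/q of sqrt(2) = [1; 2, 2, 2, ...] via the standard
# continued-fraction recurrence, checking the Pell relation
# p^2 - 2q^2 = +-1 satisfied by the classical side-and-diagonal numbers.
def sqrt2_convergents(n):
    convergents = []
    p_prev, q_prev = 1, 0        # fictitious convergent p_{-1}/q_{-1}
    p, q = 1, 1                  # first convergent: a0 = 1
    for _ in range(n):
        convergents.append((p, q))
        # all later partial quotients of sqrt(2) equal 2
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
    return convergents

for p, q in sqrt2_convergents(10):
    assert p * p - 2 * q * q in (-1, 1)
print(sqrt2_convergents(5))  # [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29)]
```

Each convergent is within 1/q² of √2, so 41/29 already agrees with √2 to about four decimal places.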

  7. An Origami Approximation to the Cosmic Web

    Science.gov (United States)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  8. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have been recently seen as attractive tools for developing efficient solutions for many real world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real world processes. In a previous contribution, we have used a well known simplified architecture to show that it provides a reasonably efficient, practical and robust, multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem

  9. Approximate particle number projection in hot nuclei

    International Nuclear Information System (INIS)

    Kosov, D.S.; Vdovin, A.I.

    1995-01-01

    Heated finite systems like, e.g., hot atomic nuclei have to be described by the canonical partition function. But this is a quite difficult technical problem and, as a rule, the grand canonical partition function is used in the studies. As a result, some shortcomings of the theoretical description appear because of the thermal fluctuations of the number of particles. Moreover, in nuclei with pairing correlations the quantum number fluctuations are introduced by some approximate methods (e.g., by the standard BCS method). The exact particle number projection is very cumbersome, and an approximate number projection method for T ≠ 0 based on the formalism of thermo field dynamics is proposed. The idea of the Lipkin-Nogami method of expanding any operator as a series in powers of the number operator is used. The system of equations for the coefficients of this expansion is written, and the solution of the system in the next approximation after the BCS one is obtained. The method, which is of the 'projection after variation' type, is applied to a degenerate single j-shell model. 14 refs., 1 tab

  10. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  11. An approximation to the interference term using Frobenius Method

    Energy Technology Data Exchange (ETDEWEB)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mail: aquilino@lmp.ufrj.br

    2007-07-01

    An analytical approximation of the interference term χ(x,ξ) is proposed. The approximation is based on the differential equation for χ(x,ξ) using the Frobenius method and the variation of parameters. The analytical expression of χ(x,ξ) obtained in terms of the elementary functions is very simple and precise. In this work one applies the approximations to the Doppler broadening functions and to the interference term in determining the neutron cross sections. Results were validated for the resonances of the 238U isotope for different energies and temperature ranges. (author)

  12. An approximation to the interference term using Frobenius Method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da

    2007-01-01

    An analytical approximation of the interference term χ(x,ξ) is proposed. The approximation is based on the differential equation for χ(x,ξ) using the Frobenius method and the variation of parameters. The analytical expression of χ(x,ξ) obtained in terms of the elementary functions is very simple and precise. In this work one applies the approximations to the Doppler broadening functions and to the interference term in determining the neutron cross sections. Results were validated for the resonances of the 238U isotope for different energies and temperature ranges. (author)

  13. The mathematical structure of the approximate linear response relation

    International Nuclear Information System (INIS)

    Yasuda, Muneki; Tanaka, Kazuyuki

    2007-01-01

    In this paper, we study the mathematical structures of the linear response relation based on Plefka's expansion and the cluster variation method in terms of the perturbation expansion, and we show how this linear response relation approximates the correlation functions of the specified system. Moreover, by comparing the perturbation expansions of the correlation functions estimated by the linear response relation based on these approximation methods with exact perturbative forms of the correlation functions, we are able to explain why the approximate techniques using the linear response relation work well

  14. Efficient approximation of random fields for numerical applications

    KAUST Repository

    Harbrecht, Helmut; Peters, Michael; Siebenmorgen, Markus

    2015-01-01

    We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.

  15. Efficient approximation of random fields for numerical applications

    KAUST Repository

    Harbrecht, Helmut

    2015-01-07

    We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.
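The pivoted Cholesky approach and its trace-based a-posteriori error estimate can be sketched compactly. The following pure-Python toy (an illustration, not the paper's implementation) builds a low-rank factorization A ≈ LLᵀ of a small exponential covariance matrix and reports trace(A − LLᵀ) as the error estimate:

```python
# Minimal sketch of the pivoted Cholesky low-rank approximation A ~ L L^T
# with the trace-based a-posteriori error estimate trace(A - L L^T).
# (Illustrative only; not the paper's implementation.)
from math import exp, sqrt

def pivoted_cholesky(A, tol):
    n = len(A)
    d = [A[i][i] for i in range(n)]       # residual diagonal
    L = []                                # rank-1 factors (rows of L^T)
    while sum(d) > tol:
        i = max(range(n), key=d.__getitem__)          # largest-diagonal pivot
        piv = sqrt(d[i])
        row = [(A[j][i] - sum(l[i] * l[j] for l in L)) / piv
               for j in range(n)]
        L.append(row)
        for j in range(n):
            d[j] -= row[j] ** 2           # update residual diagonal
    return L, sum(d)                      # factors and trace error estimate

# Exponential covariance kernel on a few 1-D points (a toy "random field"):
pts = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
A = [[exp(-abs(x - y)) for y in pts] for x in pts]
L, err = pivoted_cholesky(A, tol=1e-8)
print(len(L), err)  # low rank, tiny remaining trace error
```

Because the residual matrix stays positive semidefinite, its trace bounds every entry, so the cheap running sum of the residual diagonal is a rigorous stopping criterion, which is the role the trace estimate plays in the paper.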

  16. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in form of analytical solutions of the Navier equation, it can very well cope with the local as well as global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case, they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed in an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging, and the proposed approach significantly improved all registrations comparing the mean-square error in relation to approximating TPS with the rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS ...

  17. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
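The Monte Carlo baseline against which the closed-form approximation is compared can be sketched directly. The fragment below (illustrative tree structure and parameter values, not the article's case study) pushes lognormal basic-event uncertainties through a two-gate fault tree and reads percentiles off the sampled top-event probability:

```python
# Monte Carlo baseline for fault tree uncertainty propagation (sketch):
# lognormal basic-event probabilities pushed through a small tree,
# top = A OR (B AND C). Parameter values are illustrative only.
import math
import random

random.seed(0)  # reproducible sketch

def sample_top_event(n_samples, medians, gsd):
    """medians: median failure probabilities of basic events A, B, C;
    gsd: common geometric standard deviation of the lognormal uncertainties."""
    mu = {e: math.log(m) for e, m in medians.items()}
    sigma = math.log(gsd)
    tops = []
    for _ in range(n_samples):
        p = {e: min(1.0, random.lognormvariate(mu[e], sigma))
             for e in medians}
        p_and = p["B"] * p["C"]                 # AND gate: product
        top = p["A"] + p_and - p["A"] * p_and   # OR gate: 1 - (1-x)(1-y)
        tops.append(top)
    tops.sort()
    return tops[n_samples // 2], tops[int(0.95 * n_samples)]  # median, 95th

median_top, p95_top = sample_top_event(
    20000, medians={"A": 1e-3, "B": 1e-2, "C": 1e-2}, gsd=2.0)
print(median_top < p95_top)  # True: the 95th percentile exceeds the median
```

Such a simulation is exactly the computationally expensive step the article's lognormal closed form is meant to replace, and the sorted sample is also what a Wilks-style order-statistic bound would be read from.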

  18. 21CMMC with a 3D light-cone: the impact of the co-evolution approximation on the astrophysics of reionisation and cosmic dawn.

    Science.gov (United States)

    Greig, Bradley; Mesinger, Andrei

    2018-03-01

    We extend 21CMMC, a Monte Carlo Markov Chain sampler of 3D reionisation simulations, to perform parameter estimation directly on 3D light-cones of the cosmic 21-cm signal. This brings theoretical analysis closer to the tomographic 21-cm observations achievable with next generation interferometers like HERA and the SKA. Parameter recovery can therefore account for modes which evolve with redshift/frequency. Additionally, simulated data can be more easily corrupted to resemble real data. Using the light-cone version of 21CMMC, we quantify the biases in the recovered astrophysical parameters if we use the 21-cm power spectrum from the co-evolution approximation to fit a 3D light-cone mock observation. While ignoring the light-cone effect under most assumptions will not significantly bias the recovered astrophysical parameters, it can lead to an underestimation of the associated uncertainty. However, significant biases (~few to 10σ) can occur if the 21-cm signal evolves rapidly (i.e. the epochs of reionisation and heating overlap significantly) and: (i) foreground removal is very efficient, allowing large physical scales (k ≲ 0.1 Mpc⁻¹) to be used in the analysis; or (ii) theoretical modelling is accurate to within ~10 per cent in the power spectrum amplitude.

  19. Direct application of Padé approximant for solving nonlinear differential equations.

    Science.gov (United States)

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool to obtain a power series solution to post-treat with the Padé approximant.
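The construction the paper builds on is the textbook Padé system: given Taylor coefficients c₀…c_{m+n}, the denominator coefficients solve a small linear system and the numerator then follows by convolution. A hedged sketch in exact rational arithmetic, checked against the known [2/2] approximant of exp(x):

```python
# Sketch of the direct Pade construction (illustrative, not the paper's
# code): given Taylor coefficients c0..c_{m+n}, solve the linear system
# for the denominator b1..bn (with b0 = 1), then read off the numerator.
from fractions import Fraction

def solve(rows):
    """Gauss-Jordan elimination on an augmented n x (n+1) system."""
    n = len(rows)
    for i in range(n):
        piv = next(r for r in range(i, n) if rows[r][i] != 0)
        rows[i], rows[piv] = rows[piv], rows[i]
        for r in range(n):
            if r != i:
                f = rows[r][i] / rows[i][i]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[i])]
    return [rows[i][-1] / rows[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c (len >= m+n+1)."""
    c = [Fraction(x) for x in c]
    cc = lambda i: c[i] if i >= 0 else Fraction(0)   # c_i = 0 for i < 0
    # Solve sum_{j=0..n} b_j * c_{m+k-j} = 0 for k = 1..n, with b0 = 1:
    rows = [[cc(m + k - j) for j in range(1, n + 1)] + [-cc(m + k)]
            for k in range(1, n + 1)]
    b = [Fraction(1)] + solve(rows)
    # Numerator by convolution: a_k = sum_{j<=min(k,n)} b_j * c_{k-j}
    a = [sum(b[j] * cc(k - j) for j in range(min(k, n) + 1))
         for k in range(m + 1)]
    return a, b

# exp(x): Taylor coefficients 1, 1, 1/2, 1/6, 1/24 give the classical
# [2/2] approximant (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
c = [Fraction(1), Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]
a, b = pade(c, 2, 2)
print([str(x) for x in a])  # ['1', '1/2', '1/12']
print([str(x) for x in b])  # ['1', '-1/2', '1/12']
```

The same routine applied to the power series produced by, e.g., a perturbation method is the "post-treatment" step the abstract says the direct procedure avoids.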

  20. Continuum orbital approximations in weak-coupling theories for inelastic electron scattering

    International Nuclear Information System (INIS)

    Peek, J.M.; Mann, J.B.

    1977-01-01

    Two approximations, motivated by heavy-particle scattering theory, are tested for weak-coupling electron-atom (ion) inelastic scattering theory. They consist of replacing the one-electron scattering orbitals by their Langer uniform approximations and the use of an average trajectory approximation which entirely avoids the necessity for generating continuum orbitals. Numerical tests for a dipole-allowed and a dipole-forbidden event, based on Coulomb-Born theory with exchange neglected, reveal the error trends. It is concluded that the uniform approximation gives a satisfactory prediction for traditional weak-coupling theories while the average approximation should be limited to collision energies exceeding at least twice the threshold energy. The accuracy for both approximations is higher for positive ions than for neutral targets. Partial-wave collision-strength data indicate that greater care should be exercised in using these approximations to predict quantities differential in the scattering angle. An application to the 2s 2 S-2p 2 P transition in Ne VIII is presented

  1. Self-consistent approximations beyond the CPA: Part II

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1982-01-01

    This paper concentrates on a self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl. The construction of the augmented-space formalism for a binary alloy is sketched, and the notation to be used is derived. Using the operator methods of the augmented space, the self-consistent approximation is derived for the average Green's function and for evaluating the self-energy, taking into account the scattering by clusters of excitations. The particular cluster approximation desired is derived by treating the scattering by the excitations with S_T exactly. Fourier transforms on the disorder-space cluster-site labels solve the self-consistent set of equations. Expansion to short-range order in the alloy is also discussed. A method to reduce the problem to a computationally tractable form is described

  2. Perturbation expansions generated by an approximate propagator

    International Nuclear Information System (INIS)

    Znojil, M.

    1987-01-01

    Starting from a knowledge of an approximate propagator R at some trial energy guess E_0, a new perturbative prescription for a p-plet of bound states and of their energies is proposed. It generalizes the Rayleigh-Schroedinger (RS) degenerate perturbation theory to the nondiagonal operators R (eliminating the RS need for their diagonalisation) and defines an approximate Hamiltonian T by mere inversion. The deviation V of T from the exact Hamiltonian H is assumed small only after a subtraction of a further auxiliary Hartree-Fock-like separable ''self-consistent'' potential U of rank p. The convergence is illustrated numerically on the anharmonic oscillator example.

  3. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    Science.gov (United States)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  4. Tobacco Usage in Uttarakhand: A Dangerous Combination of High Prevalence, Widespread Ignorance, and Resistance to Quitting

    Directory of Open Access Journals (Sweden)

    Nathan John Grills

    2015-01-01

    Full Text Available Background. Nearly one-third of adults in India use tobacco, resulting in 1.2 million deaths. However, little is known about knowledge, attitudes, and practices (KAP) related to smoking in the impoverished state of Uttarakhand. Methods. A cross-sectional epidemiological prevalence survey was undertaken. Multistage cluster sampling selected 20 villages and 50 households to survey, from which 1853 people were interviewed. Tobacco prevalence and KAP were analyzed by income level, occupation, age, and sex. 95% confidence intervals were calculated using standard formulas and incorporating assumptions in relation to the clustering effect. Results. The overall prevalence of tobacco usage, defined using WHO criteria, was 38.9%. 93% of smokers and 86% of tobacco chewers were male. Prevalence of tobacco use, controlling for other factors, was associated with lower education, older age, and male sex. 97.6% of users and 98.1% of nonusers wanted less tobacco. Except for lung cancer (89% awareness, awareness of diseases caused by tobacco usage was low (cardiac: 67%; infertility: 32.5%; stroke: 40.5%. Conclusion. A dangerous combination of high tobacco usage prevalence, ignorance about its dangers, and few quit attempts being made suggests the need to develop effective and evidence based interventions to prevent a health and development disaster in Uttarakhand.
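
    The record notes that the 95% confidence intervals incorporated assumptions about the clustering effect. A minimal sketch of that kind of calculation: a Wald interval for a prevalence, widened by a design effect. The function name and the `deff` value are illustrative assumptions, not taken from the survey.

```python
import math

def prop_ci_cluster(p, n, deff=2.0, z=1.96):
    """Wald 95% CI for a prevalence estimate, widened by an assumed
    design effect `deff` to account for cluster sampling."""
    se = math.sqrt(p * (1.0 - p) / n) * math.sqrt(deff)
    return p - z * se, p + z * se

# 38.9% tobacco use among 1853 respondents (figures from the record).
lo, hi = prop_ci_cluster(0.389, 1853)
print(round(lo, 3), round(hi, 3))
```

    A design effect of 2 widens the half-width by a factor of sqrt(2) relative to simple random sampling.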

  5. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren

    2017-01-01

    , obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  6. Approximating the ground state of gapped quantum spin systems

    Energy Technology Data Exchange (ETDEWEB)

    Michalakis, Spyridon [Los Alamos National Laboratory]; Hamza, Eman [NON LANL]; Nachtergaele, Bruno [NON LANL]; Sims, Robert [NON LANL]

    2009-01-01

    We consider quantum spin systems defined on finite sets V equipped with a metric. In typical examples, V is a large, but finite subset of Z^d. For finite range Hamiltonians with uniformly bounded interaction terms and a unique, gapped ground state, we demonstrate a locality property of the corresponding ground state projector. In such systems, this ground state projector can be approximated by the product of observables with quantifiable supports. In fact, given any subset χ ⊂ V, the ground state projector can be approximated by the product of two projections, one supported on χ and one supported on χ^c, and a bounded observable supported on a boundary region in such a way that as the boundary region increases, the approximation becomes better. Such an approximation was useful in proving an area law in one dimension, and this result corresponds to a multi-dimensional analogue.

  7. Polynomial approximation on polytopes

    CERN Document Server

    Totik, Vilmos

    2014-01-01

    Polynomial approximation on convex polytopes in R^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness matching direct and converse estimates are proven. In the L^p-case so-called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s when some of the present findings were established for special, so-called simple polytopes.

  8. Ignoring detailed fast-changing dynamics of land use overestimates regional terrestrial carbon sequestration

    Directory of Open Access Journals (Sweden)

    S. Q. Zhao

    2009-08-01

    Full Text Available Land use change is critical in determining the distribution, magnitude and mechanisms of terrestrial carbon budgets at the local to global scales. To date, almost all regional to global carbon cycle studies are driven by a static land use map or land use change statistics with decadal time intervals. The biases in quantifying carbon exchange between the terrestrial ecosystems and the atmosphere caused by using such land use change information have not been investigated. Here, we used the General Ensemble biogeochemical Modeling System (GEMS), along with consistent and spatially explicit land use change scenarios with different intervals (1 yr, 5 yrs, 10 yrs, and static, respectively), to evaluate the impacts of land use change data frequency on estimating regional carbon sequestration in the southeastern United States. Our results indicate that ignoring the detailed fast-changing dynamics of land use can lead to a significant overestimation of carbon uptake by the terrestrial ecosystem. Regional carbon sequestration increased from 0.27 to 0.69, 0.80 and 0.97 Mg C ha−1 yr−1 as the land use change data interval shifted from 1 year to 5 years, to 10 years, and to static land use information, respectively. Carbon removal by forest harvesting and prolonged cumulative impacts of historical land use change on the carbon cycle accounted for the differences in carbon sequestration between static and dynamic land use change scenarios. The results suggest that it is critical to incorporate the detailed dynamics of land use change into local to global carbon cycle studies. Otherwise, it is impossible to accurately quantify the geographic distributions, magnitudes, and mechanisms of terrestrial carbon sequestration at the local to global scales.

  9. Discussion of CoSA: Clustering of Sparse Approximations

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-07

    The purpose of this talk is to discuss the possible applications of CoSA (Clustering of Sparse Approximations) to the exploitation of HSI (HyperSpectral Imagery) data. CoSA is presented by Moody et al. in the Journal of Applied Remote Sensing (“Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries”, Vol. 8, 2014) and is based on machine learning techniques.

  10. Approximate reasoning in decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M M; Sanchez, E

    1982-01-01

    The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.

  11. Green-Ampt approximations: A comprehensive analysis

    Science.gov (United States)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
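
    For context, the implicit Green-Ampt relation that the explicit models above approximate is K·t = F − S·ln(1 + F/S), with S = ψΔθ. A minimal sketch of solving it directly by Newton iteration; the parameter values below are illustrative, not taken from the study.

```python
import math

def green_ampt_F(K, S, t, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
    K*t = F - S*ln(1 + F/S), S = psi * delta_theta, solved by Newton."""
    Kt = K * t
    F = max(Kt, math.sqrt(2.0 * S * Kt))   # sorptivity-style starting guess
    for _ in range(100):
        g = F - S * math.log1p(F / S) - Kt  # residual of the implicit equation
        dg = F / (S + F)                    # d(g)/dF
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F

# Illustrative parameters: K = 1 cm/h, S = psi*dtheta = 5 cm, t = 2 h.
F = green_ampt_F(1.0, 5.0, 2.0)
residual = F - 5.0 * math.log1p(F / 5.0) - 2.0
print(round(F, 4))
```

    The explicit models compared in the record (LI, BA, VA, ...) exist precisely to avoid this iteration while staying close to the Newton solution.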

  12. Nonlinear Ritz approximation for Fredholm functionals

    Directory of Open Access Journals (Sweden)

    Mudhir A. Abdul Hussain

    2015-11-01

    Full Text Available In this article we use a modified Lyapunov-Schmidt reduction to find a nonlinear Ritz approximation for a Fredholm functional. This functional corresponds to a nonlinear Fredholm operator defined by a nonlinear fourth-order differential equation.

  13. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since the introduction of these methods about ten years ago in population genetics.
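
    A minimal sketch of the rejection flavour of ABC, the simplest of the likelihood-free methods the overview surveys: draw a parameter from the prior, simulate data, and keep the draw when a summary statistic lands close to the observed one. The Gaussian toy model, prior, and tolerance below are illustrative choices.

```python
import random
import statistics

def abc_rejection(observed_mean, n_obs, prior_draw, n_sims=20000, tol=0.1):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistic (the sample mean) falls within `tol` of the observed one.
    The likelihood itself is never evaluated."""
    accepted = []
    for _ in range(n_sims):
        mu = prior_draw()                                 # draw from the prior
        sim = [random.gauss(mu, 1.0) for _ in range(n_obs)]
        if abs(statistics.fmean(sim) - observed_mean) < tol:
            accepted.append(mu)
    return accepted

random.seed(0)
# "Observed" data: 50 points from N(2, 1).
data = [random.gauss(2.0, 1.0) for _ in range(50)]
obs_mean = statistics.fmean(data)
posterior = abc_rejection(obs_mean, 50, lambda: random.uniform(-5, 5))
print(round(statistics.fmean(posterior), 2))  # posterior mean should land near 2
```

    Shrinking `tol` trades acceptance rate for a closer match to the true posterior, which is the central tuning problem in ABC.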

  14. msBayes: Pipeline for testing comparative phylogeographic histories using hierarchical approximate Bayesian computation

    Directory of Open Access Journals (Sweden)

    Takebayashi Naoki

    2007-07-01

    Full Text Available Abstract Background Although testing for simultaneous divergence (vicariance) across different population-pairs that span the same barrier to gene flow is of central importance to evolutionary biology, researchers often equate the gene tree and population/species tree, thereby ignoring stochastic coalescent variance in their conclusions of temporal incongruence. In contrast to other available phylogeographic software packages, msBayes is the only one that analyses data from multiple species/population pairs under a hierarchical model. Results msBayes employs approximate Bayesian computation (ABC) under a hierarchical coalescent model to test for simultaneous divergence (TSD) in multiple co-distributed population-pairs. Simultaneous isolation is tested by estimating three hyper-parameters that characterize the degree of variability in divergence times across co-distributed population pairs while allowing for variation in various within population-pair demographic parameters (sub-parameters) that can affect the coalescent. msBayes is a software package consisting of several C and R programs that are run with a Perl "front-end". Conclusion The method reasonably distinguishes simultaneous isolation from temporal incongruence in the divergence of co-distributed population pairs, even with sparse sampling of individuals. Because the estimation step is decoupled from the simulation step, one can rapidly evaluate different ABC acceptance/rejection conditions and the choice of summary statistics. Given the complex and idiosyncratic nature of testing multi-species biogeographic hypotheses, we envision msBayes as a powerful and flexible tool for tackling a wide array of difficult research questions that use population genetic data from multiple co-distributed species. The msBayes pipeline is available for download at http://msbayes.sourceforge.net/ under an open source license (GNU Public License).

  15. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    International Nuclear Information System (INIS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-01-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨S^2⟩, are also developed and tested.

  16. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Degao; Yang, Yang; Zhang, Peng [Department of Chemistry, Duke University, Durham, North Carolina 27708 (United States); Yang, Weitao, E-mail: weitao.yang@duke.edu [Department of Chemistry and Department of Physics, Duke University, Durham, North Carolina 27708 (United States)

    2014-12-07

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨S^2⟩, are also developed and tested.

  17. Homicides in the major municipality of Sonsonate. 1786 1820: An approximation to the motivations for violence

    Directory of Open Access Journals (Sweden)

    Josselin Ivettee Linares Acevedo

    2014-05-01

    Full Text Available Homicide is a highly discussed topic nowadays in El Salvador. However, a little-known fact is that this crime has been present throughout the country’s history. Noteworthy is the considerable number of cases of homicide reported at Sonsonate City Hall at the end of the colonial period. Homicide took place when people’s behavior was altered for different reasons, giving rise to violent attitudes that surfaced when individuals felt the act was deserved. The motivations that led to this conduct were most closely related to drunkenness as a direct cause of homicide. Honor, passion, and self-defense are some of the most prominent motivations for the crime. It cannot be ignored that some crimes were due to debt, others took place within households – husbands murdering their own wives – and some had no motivation at all: accidents. This crime was considered one of the most serious, and shook society at the time. Indigenous people were considered the most prone to committing homicide, given that authorities regarded ignorance as the key factor in generating this sort of attitude.DOI: http://dx.doi.org/10.5377/rpsp.v1i1.1393

  18. Comment on 'Approximation for a large-angle simple pendulum period'

    International Nuclear Information System (INIS)

    Yuan Qingxin; Ding Pei

    2009-01-01

    In a recent letter, Belendez et al (2009 Eur. J. Phys. 30 L25-8) proposed an alternative to the approximation for the period of a simple pendulum suggested earlier by Hite (2005 Phys. Teach. 43 290-2), who set out to improve on the Kidd and Fogg formula (2002 Phys. Teach. 40 81-3). As a response to that approximation scheme, we obtain another analytical approximation for the large-angle pendulum period, which combines simplicity with accuracy in evaluating the exact period; moreover, for amplitudes less than 144 deg. the analytical approximate expression is more accurate than others in the literature. (letters and comments)
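
    The exact large-angle period can be written through the arithmetic-geometric mean, T/T0 = 1/agm(1, cos(θ0/2)), which makes it easy to benchmark simple closed forms such as the Kidd and Fogg formula T/T0 = 1/√cos(θ0/2) discussed in the record. The 60° amplitude below is an arbitrary test point.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean; converges quadratically."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def exact_ratio(theta0):
    """Exact T/T0 for amplitude theta0 (radians): 1 / agm(1, cos(theta0/2))."""
    return 1.0 / agm(1.0, math.cos(theta0 / 2.0))

def kidd_fogg_ratio(theta0):
    """Kidd-Fogg approximation: T/T0 = 1 / sqrt(cos(theta0/2))."""
    return 1.0 / math.sqrt(math.cos(theta0 / 2.0))

theta0 = math.radians(60.0)
err = abs(kidd_fogg_ratio(theta0) - exact_ratio(theta0)) / exact_ratio(theta0)
print(round(exact_ratio(theta0), 5), round(100 * err, 3))  # ratio, error in %
```

    The AGM route is exact because the complete elliptic integral K(k) equals π / (2·agm(1, √(1−k²))) with k = sin(θ0/2).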

  19. Reply to Steele & Ferrer: Modeling Oscillation, Approximately or Exactly?

    Science.gov (United States)

    Oud, Johan H. L.; Folmer, Henk

    2011-01-01

    This article addresses modeling oscillation in continuous time. It criticizes Steele and Ferrer's article "Latent Differential Equation Modeling of Self-Regulatory and Coregulatory Affective Processes" (2011), particularly the approximate estimation procedure applied. This procedure is the latent version of the local linear approximation procedure…

  20. Approximations for Markovian multi-class queues with preemptive priorities

    NARCIS (Netherlands)

    van der Heijden, Matthijs C.; van Harten, Aart; Sleptchenko, Andrei

    2004-01-01

    We discuss the approximation of performance measures in multi-class M/M/k queues with preemptive priorities for large problem instances (many classes and servers) using class aggregation and server reduction. We compared our approximations to exact and simulation results and found that our approach
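
    The record's preemptive-priority approximation itself is not reproduced here, but the standard single-class building block for M/M/k performance measures of this kind is the Erlang-C delay probability, computed stably via the Erlang-B recursion. This is a generic sketch, not the authors' aggregation scheme.

```python
def erlang_c(k, a):
    """P(wait) in an M/M/k queue with offered load a = lambda/mu (a < k).
    Uses the numerically stable Erlang-B recursion, then converts B -> C."""
    if not 0 < a < k:
        raise ValueError("need 0 < a < k for a stable queue")
    B = 1.0                          # Erlang B with 0 servers
    for n in range(1, k + 1):
        B = a * B / (n + a * B)      # recursion B(n) = a*B(n-1)/(n + a*B(n-1))
    rho = a / k
    return B / (1.0 - rho * (1.0 - B))

# Single server: P(wait) reduces to the utilisation rho itself.
print(round(erlang_c(1, 0.7), 6))  # 0.7
```

    Class aggregation, as in the record, typically feeds an aggregate load into formulas of this type and then disaggregates the result per class.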

  1. Approximations for W-Pair Production at Linear-Collider Energies

    CERN Document Server

    Denner, A

    1997-01-01

    We determine the accuracy of various approximations to the O(alpha) corrections for on-shell W-pair production. While an approximation based on the universal corrections arising from initial-state radiation, from the running of alpha, and from corrections proportional to m_t^2 fails in the Linear-Collider energy range, a high-energy approximation improved by the exact universal corrections is sufficiently good above about 500 GeV. These results indicate that in Monte Carlo event generators for off-shell W-pair production the incorporation of the universal corrections is not sufficient and more corrections should be included.

  2. Applicability of point-dipoles approximation to all-dielectric metamaterials

    DEFF Research Database (Denmark)

    Kuznetsova, S. M.; Andryieuski, Andrei; Lavrinenko, Andrei

    2015-01-01

    All-dielectric metamaterials consisting of high-dielectric inclusions in a low-dielectric matrix are considered as a low-loss alternative to resonant metal-based metamaterials. In this paper we investigate the applicability of the point electric and magnetic dipoles approximation to dielectric meta-atoms on the example of a dielectric ring metamaterial. Despite the large electrical size of high-dielectric meta-atoms, the dipole approximation allows for accurate prediction of the metamaterial's properties for rings with diameters up to approximately 0.8 of the lattice constant. The results provide important guidelines for design and optimization of all-dielectric metamaterials.

  3. Globally convergent optimization algorithm using conservative convex separable diagonal quadratic approximations

    NARCIS (Netherlands)

    Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.

    2009-01-01

    We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by

  4. Fractal image coding by an approximation of the collage error

    Science.gov (United States)

    Salih, Ismail; Smith, Stanley H.

    1998-12-01

    In fractal image compression an image is coded as a set of contractive transformations, and is guaranteed to generate an approximation to the original image when iteratively applied to any initial image. In this paper we present a method for mapping similar regions within an image by an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.

  5. Thermodynamic properties of sticky electrolytes in the HNC/MS approximation

    International Nuclear Information System (INIS)

    Herrera, J.N.; Blum, L.

    1991-01-01

    We study an approximation for a model which combines the sticky potential of Baxter and charged spheres. In the hypernetted chain (HNC)/mean spherical approximation (MSA), simple expressions for the thermodynamic functions are obtained. These equations should be useful in representing the properties of real electrolytes. Approximate expressions that are similar to those of the primitive model are obtained for low densities (concentrations) of the electrolyte (Author)

  6. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    Full Text Available The fact that a polynomial time algorithm is very unlikely to be devised for optimally solving the NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, the polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
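
    A concrete instance of the polynomial approximation algorithms the survey discusses is the textbook 2-approximation for minimum vertex cover: repeatedly take both endpoints of an uncovered edge. The chosen edges form a matching, and any optimal cover needs at least one endpoint per matched edge, which bounds the ratio by 2. The example graph is illustrative.

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover: for each edge not
    yet covered, add BOTH endpoints. Runs in O(|E|) time."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
cover = vertex_cover_2approx(edges)
print(sorted(cover))
```

    On this graph the optimum cover {1, 4} has size 2, so the algorithm's answer of size 4 realises the worst-case factor of exactly 2.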

  7. Dissociation between exact and approximate addition in developmental dyslexia.

    Science.gov (United States)

    Yang, Xiujie; Meng, Xiangzhi

    2016-09-01

    Previous research has suggested that number sense and language are involved in number representation and calculation, in which number sense supports approximate arithmetic, and language permits exact enumeration and calculation. Meanwhile, individuals with dyslexia have a core deficit in phonological processing. Based on these findings, we thus hypothesized that children with dyslexia may exhibit exact calculation impairment while doing mental arithmetic. The reaction time and accuracy while doing exact and approximate addition with symbolic Arabic digits and non-symbolic visual arrays of dots were compared between typically developing children and children with dyslexia. Reaction time analyses did not reveal any differences between the two groups of children; the accuracies, interestingly, revealed a dissociation between approximate and exact addition across the two groups. Specifically, the two groups of children showed no differences in approximation. Children with dyslexia, however, had significantly lower accuracy in exact addition in both symbolic and non-symbolic tasks than typically developing children. Moreover, linguistic performance was selectively associated with exact calculation across individuals. These results suggest that children with dyslexia have a mental arithmetic deficit specifically in the realm of exact calculation, while their approximation ability is relatively intact. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Ignoring the irrelevant: auditory tolerance of audible but innocuous sounds in the bat-detecting ears of moths

    Science.gov (United States)

    Fullard, James H.; Ratcliffe, John M.; Jacobs, David S.

    2008-03-01

    Noctuid moths listen for the echolocation calls of hunting bats and respond to these predator cues with evasive flight. The African bollworm moth, Helicoverpa armigera, feeds at flowers near intensely singing cicadas, Platypleura capensis, yet does not avoid them. We determined that the moth can hear the cicada by observing that both of its auditory receptors (A1 and A2 cells) respond to the cicada’s song. The firing response of the A1 cell rapidly adapts to the song and develops spike periods in less than a second that are in excess of those reported to elicit avoidance flight to bats in earlier studies. The possibility also exists that for at least part of the day, sensory input in the form of olfaction or vision overrides the moth’s auditory responses. While auditory tolerance appears to allow H. armigera to exploit a food resource in close proximity to acoustic interference, it may render their hearing defence ineffective and make them vulnerable to predation by bats during the evening when cicadas continue to sing. Our study describes the first field observation of an eared insect ignoring audible but innocuous sounds.

  9. RATIONAL APPROXIMATIONS TO GENERALIZED HYPERGEOMETRIC FUNCTIONS.

    Science.gov (United States)

    Under weak restrictions on the various free parameters, general theorems for rational representations of the generalized hypergeometric functions...and certain Meijer G-functions are developed. Upon specialization, these theorems yield a sequence of rational approximations which converge to the

  10. The generalized gradient approximation in solids and molecules

    International Nuclear Information System (INIS)

    Haas, P.

    2010-01-01

    Today, most theoretical calculations of the electronic structure of molecules, surfaces and solids are based on density functional theory (DFT) and the resulting Kohn-Sham equations. Unfortunately, the exact analytical expression for the exchange-correlation functional is not known and has to be approximated. The reliability of such a Kohn-Sham calculation depends on i) the numerical accuracy and ii) the approximation used for the exchange-correlation energy. To solve the Kohn-Sham equations, the WIEN2k code, which is one of the most accurate methods for solid-state calculations, is used. The search for better approximations for the exchange-correlation energy is an intense field of research in chemistry and physics. The main objectives of the dissertation are the development, implementation and testing of advanced exchange-correlation functionals and the analysis of existing functionals. The focus of this work is on GGA functionals. Such GGA functionals are still the most widely used functionals, in particular because they are easy to implement and require little computational effort. Several recent studies have shown that an improvement of the GGA should be possible. A detailed analysis of the results will allow us to understand why a particular GGA approximation works better for one class of elements (compounds) than for another. (Kancsar) [de]

  11. An approximation method for diffusion based leaching models

    International Nuclear Information System (INIS)

    Shukla, B.S.; Dignam, M.J.

    1987-01-01

    In connection with the fixation of nuclear waste in a glassy matrix, equations have been derived for leaching models based on a uniform concentration gradient approximation, and hence a uniform flux, therefore requiring the use of only Fick's first law. In this paper we improve on the uniform flux approximation, developing and justifying the approach. The resulting set of equations are solved to a satisfactory approximation for a matrix dissolving at a constant rate in a finite volume of leachant to give analytical expressions for the time dependence of the thickness of the leached layer, the diffusional and dissolutional contributions to the flux, and the leachant composition. Families of curves are presented which cover the full range of all the physical parameters for this system. The same procedure can be readily extended to more complex systems. (author)

  12. Approximation of ruin probabilities via Erlangized scale mixtures

    DEFF Research Database (Denmark)

    Peralta, Oscar; Rojas-Nandayapa, Leonardo; Xie, Wangyue

    2018-01-01

    In this paper, we extend an existing scheme for numerically calculating the probability of ruin of a classical Cramér–Lundberg reserve process having absolutely continuous but otherwise general claim size distributions. We employ a dense class of distributions that we denominate Erlangized scale mixtures... a simple methodology for constructing a sequence of distributions having the form Π⋆G with the purpose of approximating the integrated tail distribution of the claim sizes. Then we adapt a recent result which delivers an explicit expression for the probability of ruin in the case that the claim size distribution is modeled as an Erlangized scale mixture. We provide simplified expressions for the approximation of the probability of ruin and construct explicit bounds for the error of approximation. We complement our results with a classical example where the claim sizes are heavy-tailed...
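
    For the special case of exponential claims, the Cramér–Lundberg ruin probability is available in closed form, which makes a handy sanity check for any approximation scheme like the one in the record. A sketch comparing the closed form with a truncated-horizon Monte Carlo estimate; all parameter values are illustrative.

```python
import math
import random

def ruin_prob_exact(u, mu, theta):
    """Cramer-Lundberg ruin probability for Exp(1/mu) claims and safety
    loading theta: psi(u) = exp(-theta*u / ((1+theta)*mu)) / (1+theta)."""
    return math.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)

def ruin_prob_mc(u, lam, mu, theta, n_paths=4000, n_claims=300):
    """Monte Carlo check: ruin can only occur at claim instants, so the
    surplus is walked claim by claim; n_claims truncates the horizon."""
    c = (1.0 + theta) * lam * mu                     # premium rate
    ruined = 0
    for _ in range(n_paths):
        surplus = u
        for _ in range(n_claims):
            surplus += c * random.expovariate(lam)   # premiums until next claim
            surplus -= random.expovariate(1.0 / mu)  # claim payment
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

random.seed(1)
exact = ruin_prob_exact(2.0, 1.0, 0.25)
mc = ruin_prob_mc(2.0, 1.0, 1.0, 0.25)
print(round(exact, 4), round(mc, 4))
```

    The horizon truncation biases the estimate slightly downward, but with a positive safety loading the surviving paths drift upward quickly, so the bias is small.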

  13. Approximate models for broken clouds in stochastic radiative transfer theory

    International Nuclear Information System (INIS)

    Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas

    2014-01-01

    This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models

  14. Plasma Physics Approximations in Ares

    International Nuclear Information System (INIS)

    Managan, R. A.

    2015-01-01

Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential, μ or ζ = ln(1 + e^(μ/θ)), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(-μ/θ)) F_(1/2)(μ/θ), F'_(1/2)/F_(1/2), F^c_α, and F^c_β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
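As a rough illustration of why such fits are wanted, the Fermi-Dirac integral F_(1/2)(μ/θ) can be evaluated by direct quadrature, which is far more expensive than evaluating a rational-function fit. A minimal sketch (illustrative numerics only, not the Lee & More or Zimmerman fits):

```python
import math

def fermi_dirac_half(eta, t_max=60.0, n=60000):
    """F_{1/2}(eta) = integral over t in [0, inf) of sqrt(t)/(1 + exp(t - eta)),
    here by a brute-force midpoint rule. Production codes use rational fits
    precisely because quadrature like this is too slow to call per zone."""
    h = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.sqrt(t) / (1.0 + math.exp(t - eta))
    return total * h

# Non-degenerate limit (eta << 0): F_{1/2}(eta) -> (sqrt(pi)/2) * exp(eta),
# one of the limiting values the new fits are designed to preserve exactly.
eta = -5.0
ratio = fermi_dirac_half(eta) / (math.sqrt(math.pi) / 2 * math.exp(eta))
print(ratio)  # close to 1
```

The degenerate limit F_(1/2)(η) → (2/3) η^(3/2) for η → ∞ can be checked the same way.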

  15. Approximate first integrals of a chaotic Hamiltonian system | Unal ...

    African Journals Online (AJOL)

    Approximate first integrals (conserved quantities) of a Hamiltonian dynamical system with two-degrees of freedom which arises in the modeling of galaxy have been obtained based on the approximate Noether symmetries for the resonance ω1 = ω2. Furthermore, Kolmogorov-Arnold-Moser (KAM) curves have been ...

  16. On a saddlepoint approximation to the Markov binomial distribution

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    A nonstandard saddlepoint approximation to the distribution of a sum of Markov dependent trials is introduced. The relative error of the approximation is studied, not only for the number of summands tending to infinity, but also for the parameter approaching the boundary of its definition range...

  17. Efficient approximation of black-box functions and Pareto sets

    NARCIS (Netherlands)

    Rennen, G.

    2009-01-01

    In the case of time-consuming simulation models or other so-called black-box functions, we determine a metamodel which approximates the relation between the input- and output-variables of the simulation model. To solve multi-objective optimization problems, we approximate the Pareto set, i.e. the

  18. 36 CFR 254.11 - Exchanges at approximately equal value.

    Science.gov (United States)

    2010-07-01

    ... equal value. 254.11 Section 254.11 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LANDOWNERSHIP ADJUSTMENTS Land Exchanges § 254.11 Exchanges at approximately equal value. (a) The authorized officer may exchange lands which are of approximately equal value upon a determination that: (1...

  19. Modification of linear response theory for mean-field approximations

    NARCIS (Netherlands)

    Hütter, M.; Öttinger, H.C.

    1996-01-01

    In the framework of statistical descriptions of many particle systems, the influence of mean-field approximations on the linear response theory is studied. A procedure, analogous to one where no mean-field approximation is involved, is used in order to determine the first order response of the

  20. On the mathematical treatment of the Born-Oppenheimer approximation

    International Nuclear Information System (INIS)

    Jecko, Thierry

    2014-01-01

Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not correspond exactly to the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics

  1. On the mathematical treatment of the Born-Oppenheimer approximation

    Energy Technology Data Exchange (ETDEWEB)

    Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr [AGM, UMR 8088 du CNRS, Université de Cergy-Pontoise, Département de mathématiques, site de Saint Martin, 2 avenue Adolphe Chauvin, F-95000 Pontoise (France)

    2014-05-15

Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not correspond exactly to the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.

  2. Nonstandard approximation schemes for lower dimensional quantum field theories

    International Nuclear Information System (INIS)

    Fitzpatrick, D.A.

    1981-01-01

The purpose of this thesis has been to apply two different nonstandard approximation schemes to a variety of lower-dimensional field theories. In doing this, we show their applicability where standard (e.g., Feynman or Rayleigh-Schroedinger) approximation schemes are inapplicable. We have applied the well-known mean-field approximation scheme of Guralnik et al. to general lower-dimensional theories: the phi⁴ field theory in one dimension, and the massive and massless Thirring models in two dimensions. In each case, we derive a bound-state propagator and then expand the theory in terms of the original and bound-state propagators. The results obtained can be compared with previously known results and, in general, show reasonably good convergence. In the second half of the thesis, we develop a self-consistent quantum mechanical approximation scheme. This can be applied to any monotonic polynomial potential. It has been applied in detail to the anharmonic oscillator, and the results in several analytical domains are very good, including extensive tables of numerical results

  3. APPECT: An Approximate Backbone-Based Clustering Algorithm for Tags

    DEFF Research Database (Denmark)

    Zong, Yu; Xu, Guandong; Jin, Pin

    2011-01-01

algorithm for Tags (APPECT). The main steps of APPECT are: (1) we execute the K-means algorithm on a tag similarity matrix M times and collect a set of tag clustering results Z={C1,C2,…,Cm}; (2) we form the approximate backbone of Z by executing a greedy search; (3) we fix the approximate backbone...... as the initial tag clustering result and then assign the remaining tags to the corresponding clusters based on similarity. Experimental results on three real-world datasets, namely MedWorm, MovieLens and Dmoz, demonstrate the effectiveness and superiority of the proposed method against the traditional...... Agglomerative Clustering on tagging data, which possesses inherent drawbacks such as sensitivity to initialization. In this paper, we instead make use of the approximate backbone of tag clustering results to find better tag clusters. In particular, we propose an APProximate backbonE-based Clustering...
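One plausible reading of step (2) can be sketched as follows, under the assumption that the backbone consists of tags whose co-clustering is stable across all M runs. The abstract does not spell out the greedy search, so this is illustrative, not the authors' algorithm, and the tag data are invented:

```python
# Sketch of an "approximate backbone": keep only tag pairs that land in the
# same cluster in every one of the M clustering runs, then take connected
# components of that agreement graph as the stable core clusters.
from itertools import combinations

def approximate_backbone(runs):
    """runs: list of clusterings; each clustering is a list of sets of tags.
    Returns the groups of tags that are co-clustered in every run."""
    def co_clustered(run):
        pairs = set()
        for cluster in run:
            pairs.update(frozenset(p) for p in combinations(sorted(cluster), 2))
        return pairs

    stable = co_clustered(runs[0])
    for run in runs[1:]:
        stable &= co_clustered(run)

    # Connected components of the stable-pair graph form the backbone groups.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for pair in stable:
        a, b = tuple(pair)
        parent[find(a)] = find(b)
    groups = {}
    for tag in parent:
        groups.setdefault(find(tag), set()).add(tag)
    return sorted(groups.values(), key=sorted)

# Two invented K-means runs that disagree only about where "baking" belongs:
runs = [
    [{"python", "code"}, {"food", "recipe", "baking"}],
    [{"python", "code", "baking"}, {"food", "recipe"}],
]
print(approximate_backbone(runs))
```

The unstable tag ("baking" above) is exactly what step (3) would then assign to a backbone cluster by similarity.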

  4. Breakdown of the few-level approximation in collective systems

    International Nuclear Information System (INIS)

    Kiffner, M.; Evers, J.; Keitel, C. H.

    2007-01-01

    The validity of the few-level approximation in dipole-dipole interacting collective systems is discussed. As an example system, we study the archetype case of two dipole-dipole interacting atoms, each modeled by two complete sets of angular momentum multiplets. We establish the breakdown of the few-level approximation by first proving the intuitive result that the dipole-dipole induced energy shifts between collective two-atom states depend on the length of the vector connecting the atoms, but not on its orientation, if complete and degenerate multiplets are considered. A careful analysis of our findings reveals that the simplification of the atomic level scheme by artificially omitting Zeeman sublevels in a few-level approximation generally leads to incorrect predictions. We find that this breakdown can be traced back to the dipole-dipole coupling of transitions with orthogonal dipole moments. Our interpretation enables us to identify special geometries in which partial few-level approximations to two- or three-level systems are valid

  5. Analytic approximation for the modified Bessel function I_(-2/3)(x)

    Science.gov (United States)

    Martin, Pablo; Olivares, Jorge; Maass, Fernando

    2017-12-01

In the present work an analytic approximation to the modified Bessel function of negative fractional order I_(-2/3)(x) is presented. The approximation is valid for every positive value of the independent variable. The accuracy is high in spite of the small number (4) of parameters used. The approximation is a combination of elementary functions with rational ones. Power series and asymptotic expansions are used simultaneously to obtain the approximation.
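The abstract does not reproduce the 4-parameter fit itself, but reference values that any such approximation would be checked against can be generated from the defining power series of I_ν(x). A sketch (illustrative, valid for small and moderate x):

```python
import math

def bessel_I(nu, x, terms=40):
    """Modified Bessel function I_nu(x) from its power series:
    I_nu(x) = sum over k of (x/2)^(2k+nu) / (k! * Gamma(k+nu+1)).
    math.gamma handles the negative non-integer arguments that arise for
    nu = -2/3, since k + nu + 1 = k + 1/3 > 0 for all k >= 0."""
    half = x / 2.0
    total = 0.0
    for k in range(terms):
        total += half ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
    return total

# Leading behavior as x -> 0+: I_nu(x) ~ (x/2)^nu / Gamma(nu+1), the kind of
# limit the paper's combined power-series/asymptotic fit must reproduce.
x = 0.01
lead = (x / 2) ** (-2 / 3) / math.gamma(1 / 3)
r = bessel_I(-2 / 3, x) / lead
print(r)  # ~1 for small x
```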

  6. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
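Of the two kernel approximations named, random Fourier features are the simpler to sketch: map each input x to a finite feature vector z(x) so that z(x)·z(y) ≈ k(x, y) for the RBF kernel, after which a linear ranker can be trained on z. The following is a minimal, illustrative implementation of the feature map only (data and parameters are made up; the paper's RankSVM training on top of it is not shown):

```python
import math, random

def rff_features(x, ws, bs):
    """Random Fourier feature map z(x) with z(x)·z(y) ≈ exp(-||x-y||^2/(2*sigma^2)).
    ws are Gaussian vectors with per-component std 1/sigma; bs are uniform
    phases in [0, 2*pi). Illustrative sketch of the idea, not the paper's code."""
    D = len(ws)
    return [math.sqrt(2.0 / D) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(ws, bs)]

random.seed(0)
sigma = 1.0
dim, D = 3, 5000  # input dimension and number of random features (invented)
ws = [[random.gauss(0.0, 1.0 / sigma) for _ in range(dim)] for _ in range(D)]
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = [0.2, -0.5, 1.0], [0.1, 0.3, 0.7]
zx, zy = rff_features(x, ws, bs), rff_features(y, ws, bs)
approx = sum(a * b for a, b in zip(zx, zy))
exact = math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))
print(approx, exact)  # the two values should be close
```

The appeal for ranking is exactly what the abstract states: with z in hand, no n-by-n kernel matrix over the training pairs is ever formed.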

  7. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  8. Traveling-cluster approximation for uncorrelated amorphous systems

    International Nuclear Information System (INIS)

    Sen, A.K.; Mills, R.; Kaplan, T.; Gray, L.J.

    1984-01-01

We have developed a formalism for including cluster effects in the one-electron Green's function for a positionally disordered (liquid or amorphous) system without any correlation among the scattering sites. This method is an extension of the technique known as the traveling-cluster approximation (TCA), originally obtained and applied to a substitutional alloy by Mills and Ratanavararaksa. We have also proved the appropriate fixed-point theorem, which guarantees, for a bounded local potential, that the self-consistent equations always converge upon iteration to a unique, Herglotz solution. To our knowledge, this is the only analytic theory for considering cluster effects. Furthermore, we have performed some computer calculations in the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results have been compared with "exact calculations" (which, in principle, take into account all cluster effects) and with the coherent-potential approximation (CPA), which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA and yet, apparently, the pair approximation distorts some of the features of the exact results

  9. Unambiguous results from variational matrix Padé approximants

    International Nuclear Information System (INIS)

    Pindor, Maciej.

    1979-10-01

Variational Matrix Padé Approximants (VMPA) are studied as a nonlinear variational problem. It is shown that although a stationary value of the Schwinger functional is a stationary value of the VMPA, the latter also has another stationary value. It is therefore proposed that, instead of looking for a stationary point of the VMPA, one minimizes some non-negative functional and then calculates the VMPA at the point where the former has its absolute minimum. This approach, which we call the Method of the Variational Gradient (MVG), gives unambiguous results and is also shown to minimize a distance between the approximate and the exact stationary values of the Schwinger functional

  10. Lagrangians for plasmas in drift-fluid approximation

    International Nuclear Information System (INIS)

    Pfirsch, D.; Correa-Restrepo, D.

    1996-10-01

    For drift waves and related instabilities conservation laws can play a crucial role. In an ideal theory these conservation laws are guaranteed when a Lagrangian can be found from which the equations for the various quantities result by Hamilton's principle. Such a Lagrangian for plasmas in drift-fluid approximation was obtained by a heuristic method in a recent paper by Pfirsch and Correa-Restrepo. In the present paper the same Lagrangian is derived from the exact multi-fluid Lagrangian via an iterative approximation procedure which resembles the standard method usually applied to the equations of motion. That method, however, does not guarantee all the conservation laws to hold. (orig.)

  11. Error Estimates for the Approximation of the Effective Hamiltonian

    International Nuclear Information System (INIS)

    Camilli, Fabio; Capuzzo Dolcetta, Italo; Gomes, Diogo A.

    2008-01-01

We study approximation schemes for the cell problem arising in homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting

  12. Mean-field approximation for spacing distribution functions in classical systems

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.

  13. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  14. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    Science.gov (United States)

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
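The "finite neighborhood" convergence described here can be illustrated with a standard approximate-dynamic-programming toy example (this is not the paper's GPI algorithm): value iteration on a small invented MDP, with a bounded error injected at every backup to mimic an approximate value-function representation, stays within ε/(1−γ) of the exact values.

```python
import random

# Toy illustration: exact vs. error-perturbed value iteration on a 2-state,
# 2-action deterministic MDP (states/rewards invented for illustration).
gamma, eps = 0.9, 0.05
reward = [[1.0, 0.0], [0.0, 2.0]]  # reward[s][a]
nxt = [[0, 1], [0, 1]]             # nxt[s][a]: deterministic next state

def backup(V, noise=0.0):
    """One Bellman optimality backup, optionally perturbed by a bounded error
    (|error| <= noise) at every state, standing in for approximation error."""
    return [max(reward[s][a] + gamma * V[nxt[s][a]] + noise * random.uniform(-1, 1)
                for a in range(2)) for s in range(2)]

random.seed(1)
V_exact, V_noisy = [0.0, 0.0], [0.0, 0.0]
for _ in range(200):
    V_exact = backup(V_exact)
    V_noisy = backup(V_noisy, noise=eps)

gap = max(abs(a - b) for a, b in zip(V_exact, V_noisy))
print(gap, eps / (1 - gamma))  # gap stays below eps/(1-gamma) = 0.5
```

Because each perturbed backup differs from the exact one by at most ε and the backup is a γ-contraction, the accumulated gap is bounded by ε(1 + γ + γ² + …) = ε/(1−γ), which is the finite-neighborhood behavior the abstract's convergence criterion formalizes for GPI.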

  15. Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers

    Directory of Open Access Journals (Sweden)

    Emily Szkudlarek

    2018-05-01

Full Text Available Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets, are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children (n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that

  16. Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers.

    Science.gov (United States)

    Szkudlarek, Emily; Brannon, Elizabeth M

    2018-01-01

Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets, are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children (n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that approximate arithmetic

  17. Approximate Dispersion Relations for Waves on Arbitrary Shear Flows

    Science.gov (United States)

Ellingsen, S. Å.; Li, Y.

    2017-12-01

An approximate dispersion relation is derived and presented for linear surface waves atop a shear current whose magnitude and direction can vary arbitrarily with depth. The approximation, derived to first order of deviation from potential flow, is shown to produce good approximations at all wavelengths for a wide range of naturally occurring shear flows as well as widely used model flows. The relation reduces in many cases to a 3-D generalization of the much used approximation by Skop (1987), developed further by Kirby and Chen (1989), but is shown to be more robust, succeeding in situations where the Kirby and Chen model fails. The two approximations incur the same numerical cost and difficulty. While the Kirby and Chen approximation is excellent for a wide range of currents, the exact criteria for its applicability have not been known. We explain the apparently serendipitous success of the latter and derive proper conditions of applicability for both approximate dispersion relations. Our new model has a greater range of applicability. A second-order approximation is also derived. It greatly improves accuracy, which is shown to be important in difficult cases. It has an advantage over the corresponding second-order expression proposed by Kirby and Chen in that its criterion of accuracy is explicitly known, which is not currently the case for the latter to our knowledge. Our second-order term is also arguably significantly simpler to implement, and more physically transparent, than its sibling due to Kirby and Chen. Plain Language Summary: In order to answer key questions such as how the ocean surface affects the climate, erodes the coastline and transports nutrients, we must understand how waves move. This is not so easy when depth-varying currents are present, as they often are in coastal waters. We have developed a modeling tool for accurately predicting wave properties in such situations, ready for use, for example, in the complex oceanographic computer models. Our

  18. Approximated solutions to Born-Infeld dynamics

    International Nuclear Information System (INIS)

    Ferraro, Rafael; Nigro, Mauro

    2016-01-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  19. Approximated solutions to Born-Infeld dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  20. Adomian Decomposition Method for Transient Neutron Transport with Pomraning-Eddington Approximation

    International Nuclear Information System (INIS)

    Hendi, A.A.; Abulwafa, E.E.

    2008-01-01

The time-dependent neutron transport problem is approximated using the Pomraning-Eddington approximation. This is a two-flux approximation that expands the angular intensity in terms of the energy density and the net flux, converting the integro-differential Boltzmann equation into two first-order differential equations. The Adomian decomposition method, which is used to solve linear or nonlinear differential equations, is applied to solve the resultant two differential equations for the neutron energy density and net flux, from which the neutron angular intensity can be calculated through the Pomraning-Eddington approximation