DEFF Research Database (Denmark)
Lange, Ann-Christina
2016-01-01
This paper provides an analysis of strategic uses of ignorance or not-knowing in one of the most secretive industries within the financial sector. The focus of the paper is on the relation between imitation and ignorance within the organizational structure of high-frequency trading (HFT) firms. We investigate the kinds of imitations that might be produced from structures of not-knowing (i.e. structures intended to divide, obscure and protect knowledge). This point is illustrated through ethnographic studies and interviews within five HFT firms. The data show how a black-box structure of ignorance...
DEFF Research Database (Denmark)
Nottelmann, Nikolaj
2016-01-01
This chapter discusses varieties of ignorance divided according to kind (what the subject is ignorant of), degree, and order (e.g. ignorance of ignorance equals second-order ignorance). It provides analyses of notions such as factual ignorance, erotetic ignorance (ignorance of answers to question...
Ignorability for categorical data
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
We study the problem of ignorability in likelihood-based inference from incomplete categorical data. Two versions of the coarsened at random assumption (car) are distinguished, their compatibility with the parameter distinctness assumption is investigated and several conditions for ignorability...
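The practical payoff of ignorability is that, when the coarsening mechanism satisfies car and parameter distinctness, likelihood-based inference may skip modelling the mechanism entirely. A minimal sketch of the simplest ignorable case (missing completely at random, with illustrative numbers) shows the observed-case maximum-likelihood estimate of a Bernoulli parameter remaining consistent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
p_true = 0.6                          # Bernoulli parameter we want to estimate
x = rng.random(n) < p_true            # complete (partly unobserved) data
observed = rng.random(n) < 0.7        # MCAR mask: each value is seen with prob 0.7

# Under an ignorable mechanism the likelihood factorises, so the MLE computed
# from the observed cases alone is consistent for p_true.
p_hat_complete = x.mean()
p_hat_observed = x[observed].mean()

print(p_hat_complete, p_hat_observed)
```

Under a non-ignorable mechanism (e.g. values missing more often when x is 1), the same observed-case estimate would be biased, which is exactly what the car and distinctness conditions rule out.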
Conroy, Mary
1989-01-01
Discusses how teachers can deal with student misbehavior by ignoring negative behavior that is motivated by a desire for attention. Practical techniques are described for pinpointing attention seekers, enlisting classmates to deal with misbehaving students, ignoring misbehavior, and distinguishing behavior that responds to this technique from…
DEFF Research Database (Denmark)
Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.
2016-01-01
We examine strategic self-ignorance—the use of ignorance as an excuse to over-indulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals—a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five subjects (58%) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance.
Directory of Open Access Journals (Sweden)
Mahmoud Eid
2012-06-01
The clash of ignorance thesis presents a critique of the clash of civilizations theory. It challenges the assumptions that civilizations are monolithic entities that do not interact and that the Self and the Other are always opposed to each other. Despite some significantly different values and clashes between Western and Muslim civilizations, they overlap with each other in many ways and have historically demonstrated the capacity for fruitful engagement. The clash of ignorance thesis makes a significant contribution to the understanding of intercultural and international communication as well as to the study of inter-group relations in various other areas of scholarship. It does this by bringing forward for examination the key impediments to mutually beneficial interaction between groups. The thesis directly addresses the particular problem of ignorance that other epistemological approaches have not raised in a substantial manner. Whereas the critique of Orientalism deals with the hegemonic construction of knowledge, the clash of ignorance paradigm broadens the inquiry to include various actors whose respective distortions of knowledge symbiotically promote conflict with each other. It also augments the power-knowledge model to provide conceptual and analytical tools for understanding the exploitation of ignorance for the purposes of enhancing particular groups’ or individuals’ power. Whereas academics, policymakers, think tanks, and religious leaders have referred to the clash of ignorance concept, this essay contributes to its development as a theory that is able to provide a valid basis to explain the empirical evidence drawn from relevant cases.
Ignorance, information and autonomy.
Harris, J; Keywood, K
2001-09-01
People have a powerful interest in genetic privacy and its associated claim to ignorance, and some equally powerful desires to be shielded from disturbing information are often voiced. We argue, however, that there is no such thing as a right to remain in ignorance, where a right is understood as an entitlement that trumps competing claims. This does not of course mean that information must always be forced upon unwilling recipients, only that there is no prima facie entitlement to be protected from true or honest information about oneself. Any claims to be shielded from information about the self must compete on equal terms with claims based in the rights and interests of others. In balancing the weight and importance of rival considerations about giving or withholding information, if rights claims have any place, rights are more likely to be defensible on the side of honest communication of information rather than in defence of ignorance. The right to free speech and the right to decline to accept responsibility to take decisions for others imposed by those others seem to us more plausible candidates for fully fledged rights in this field than any purported right to ignorance. Finally, and most importantly, if the right to autonomy is invoked, a proper understanding of the distinction between claims to liberty and claims to autonomy shows that the principle of autonomy, as it is understood in contemporary social ethics and English law, supports the giving rather than the withholding of information in most circumstances.
Son, Lisa K; Kornell, Nate
2010-02-01
Although ignorance and uncertainty are usually unwelcome feelings, they have unintuitive advantages for both human and non-human animals, which we review here. We begin with the perils of too much information: expertise and knowledge can come with illusions (and delusions) of knowing. We then describe how withholding information can counteract these perils: providing people with less information enables them to judge more precisely what they know and do not know, which in turn enhances long-term memory. Data are presented from a new experiment that illustrates how knowing what we do not know can result in helpful choices and enhanced learning. We conclude by showing that ignorance can be a virtue, as long as it is recognized and rectified. Copyright 2009 Elsevier B.V. All rights reserved.
The logic of strategic ignorance.
McGoey, Linsey
2012-09-01
Ignorance and knowledge are often thought of as opposite phenomena. Knowledge is seen as a source of power, and ignorance as a barrier to consolidating authority in political and corporate arenas. This article disputes this, exploring the ways that ignorance serves as a productive asset, helping individuals and institutions to command resources, deny liability in the aftermath of crises, and to assert expertise in the face of unpredictable outcomes. Through a focus on the Food and Drug Administration's licensing of Ketek, an antibiotic drug manufactured by Sanofi-Aventis and linked to liver failure, I suggest that in drug regulation, different actors, from physicians to regulators to manufacturers, often battle over who can attest to the least knowledge of the efficacy and safety of different drugs - a finding that raises new insights about the value of ignorance as an organizational resource. © London School of Economics and Political Science 2012.
Ignoring Ignorance: Notes on Pedagogical Relationships in Citizen Science
Directory of Open Access Journals (Sweden)
Michael Scroggins
2017-04-01
Theoretically, this article seeks to broaden the conceptualization of ignorance within STS by drawing on a line of theory developed in the philosophy and anthropology of education, arguing that ignorance can be productively conceptualized as a state of possibility and that doing so can enable more democratic forms of citizen science. In contrast to conceptualizations of ignorance as a lack, lag, or manufactured product, ignorance is developed here as both the opening move in scientific inquiry and the common ground over which that inquiry proceeds. Empirically, the argument is developed through an ethnographic description of Scroggins' participation in a failed citizen science project at a DIYbio laboratory. Supporting the empirical case are a review of the STS literature on expertise and a critical examination of the structures of participation within two canonical citizen science projects. Though onerous, close attention to how people transform one another during inquiry can put increasingly democratic forms of citizen science, grounded in the commonness of ignorance, into practice.
International Nuclear Information System (INIS)
Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M
2014-01-01
We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)
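The noiseless baseline behind this analysis, Grover's algorithm, can be sketched as a plain state-vector simulation (this deliberately omits any noise model, which is the paper's actual subject):

```python
import numpy as np

def grover_success_prob(n_items: int, marked: int, iterations: int) -> float:
    """Probability of measuring the marked item after Grover iterations,
    on an ideal (noiseless) state-vector simulation."""
    psi = np.full(n_items, 1.0 / np.sqrt(n_items))  # uniform superposition
    for _ in range(iterations):
        psi[marked] *= -1.0              # oracle: phase-flip the marked item
        psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
    return float(psi[marked] ** 2)

# For N = 16 the optimal iteration count is about (pi/4) * sqrt(16) ~ 3.
print(grover_success_prob(16, marked=3, iterations=3))  # ~0.96
```

With noise injected at each step, this success probability degrades, which motivates the runtime lower bounds the authors prove for their fault-ignorant algorithms.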
Traffic forecasts ignoring induced demand
DEFF Research Database (Denmark)
Næss, Petter; Nicolaisen, Morten Skou; Strand, Arvid
2012-01-01
The paper examines the performance of a proposed road project in Copenhagen with and without short-term induced traffic included in the transport model. The available transport model was not able to include long-term induced traffic resulting from changes in land use and in the level of service of public transport. Even though the model calculations included only a part of the induced traffic, the difference in cost-benefit results compared to the model excluding all induced traffic was substantial. The results show lower travel time savings, more adverse environmental impacts and a considerably lower benefit-cost ratio when induced traffic is partly accounted for than when it is ignored. By exaggerating the economic benefits of road capacity increase and underestimating its negative effects, omission of induced traffic can result in over-allocation of public money on road construction and correspondingly less focus on other...
The Power of Ignorance | Code | Philosophical Papers
African Journals Online (AJOL)
Taking my point of entry from George Eliot's reference to 'the power of Ignorance', I analyse some manifestations of that power as she portrays it in the life of a young woman of affluence, in her novel Daniel Deronda. Comparing and contrasting this kind of ignorance with James Mill's avowed ignorance of local tradition and ...
From dissecting ignorance to solving algebraic problems
International Nuclear Information System (INIS)
Ayyub, Bilal M.
2004-01-01
Engineers and scientists are increasingly required to design, test, and validate new complex systems in simulation environments and/or with limited experimental results due to international and/or budgetary restrictions. Dealing with complex systems requires assessing knowledge and information by critically evaluating them in terms of relevance, completeness, non-distortion, coherence, and other key measures. Using the concepts and definitions from evolutionary knowledge and epistemology, ignorance is examined and classified in the paper. Two ignorance states for a knowledge agent are identified: (1) a non-reflective (or blind) state, i.e. the person does not know of self-ignorance, a case of ignorance of ignorance; and (2) a reflective state, i.e. the person knows and recognizes self-ignorance. Ignorance can be viewed to have a hierarchical classification based on its sources and nature, as provided in the paper. The paper also explores limits on knowledge construction, closed and open world assumptions, and fundamentals of evidential reasoning using belief revision and diagnostics within the framework of ignorance analysis for knowledge construction. The paper also examines an algebraic problem set identified by Sandia National Laboratories as a basic building block for uncertainty propagation in computational mechanics. Solution algorithms are provided for the problem set under various assumptions about the state of knowledge about its parameters.
On the Rationality of Pluralistic Ignorance
DEFF Research Database (Denmark)
Bjerring, Jens Christian Krarup; Hansen, Jens Ulrik; Pedersen, Nikolaj Jang Lee Linding
2014-01-01
Pluralistic ignorance is a socio-psychological phenomenon that involves a systematic discrepancy between people’s private beliefs and public behavior in certain social contexts. Recently, pluralistic ignorance has gained increased attention in formal and social epistemology. But to get clear...
Ignorance-Based Instruction in Higher Education.
Stocking, S. Holly
1992-01-01
Describes how three groups of educators (in a medical school, a psychology department, and a journalism school) are helping instructors and students to recognize, manage, and use ignorance to promote learning. (SR)
Is Ignorance of Climate Change Culpable?
Robichaud, Philip
2017-10-01
Sometimes ignorance is an excuse. If an agent did not know and could not have known that her action would realize some bad outcome, then it is plausible to maintain that she is not to blame for realizing that outcome, even when the act that leads to this outcome is wrong. This general thought can be brought to bear in the context of climate change insofar as we think (a) that the actions of individual agents play some role in realizing climate harms and (b) that these actions are apt targets for being considered right or wrong. Are agents who are ignorant about climate change and the way their actions contribute to it excused because of their ignorance, or is their ignorance culpable? In this paper I examine these questions from the perspective of recent developments in the theories of responsibility for ignorant action and characterize their verdicts. After developing some objections to existing attempts to explore these questions, I characterize two influential theories of moral responsibility and discuss their implications for three different types of ignorance about climate change. I conclude with some recommendations for how we should react in the face of the theories' conflicting verdicts. The answer to the question posed in the title, then, is: "Well, it's complicated."
Knowledge, responsibility, decision making and ignorance
DEFF Research Database (Denmark)
Huniche, Lotte
2001-01-01
of and ignoring) seems to be commonly applicable to describing persons living at risk for Huntington's Disease (HD). So what does everyday conduct of life look like from an "ignorance" perspective? And how can we discuss and argue about morality and ethics taking these seemingly diverse ways of living at risk into account? Posing this question, I hope to contribute to new reflections on possibilities and constraints in people's lives with HD as well as in research and to open up new ways of discussing "right and wrong".
DMPD: TLR ignores methylated RNA? [Dynamic Macrophage Pathway CSML Database
Lifescience Database Archive (English)
TLR ignores methylated RNA? Ishii KJ, Akira S. Immunity. 2005 Aug;23(2):111-3. PubMed ID: 16111629.
Should general psychiatry ignore somatization and hypochondriasis?
Creed, Francis
2006-10-01
This paper examines the tendency for general psychiatry to ignore somatization and hypochondriasis. These disorders are rarely included in national surveys of mental health and are not usually regarded as a concern of general psychiatrists; yet primary care doctors and other physicians often feel let down by psychiatry's failure to offer help in this area of medical practice. Many psychiatrists are unaware of the suffering, impaired function and high costs that can result from these disorders, because these occur mainly within primary care and secondary medical services. Difficulties in diagnosis and a tendency to regard them as purely secondary phenomena of depression, anxiety and related disorders mean that general psychiatry may continue to ignore somatization and hypochondriasis. If general psychiatry embraced these disorders more fully, however, it might lead to better prevention and treatment of depression as well as helping to prevent the severe disability that may arise in association with these disorders.
Issues ignored in laboratory quality surveillance
International Nuclear Information System (INIS)
Zeng Jing; Li Xingyuan; Zhang Tingsheng
2008-01-01
According to the requirements for laboratory quality surveillance in ISO 17025, this paper analyzes and discusses the issues ignored in laboratory quality surveillance. To solve the present problems, laboratories need to understand the responsibilities of quality surveillance correctly, establish an effective working routine for quality surveillance, and carry out the quality surveillance work accordingly. The object of quality surveillance shall be 'the operator' directly engaged in examination/calibration in the laboratory, especially personnel in training (who are engaged in examination/calibration). Quality supervisors shall be fully authorized, so that they can correctly understand the responsibilities of quality surveillance and hold the right of 'full supervision'. The laboratory shall also arrange the necessary training for quality supervisors, so that they receive sufficient guidance in time and have the required qualifications or occupational prerequisites. (authors)
Ignorance of electrosurgery among obstetricians and gynaecologists.
Mayooran, Zorana; Pearce, Scott; Tsaltas, Jim; Rombauts, Luk; Brown, T Ian H; Lawrence, Anthony S; Fraser, Kym; Healy, David L
2004-12-01
The purpose of this study was to assess the level of skill of laparoscopic surgeons in electrosurgery. Subjects were asked to complete a practical diathermy station and a written test of electrosurgical knowledge. Tests were held in teaching and non-teaching hospitals. Twenty specialists in obstetrics and gynaecology were randomly selected and tested on the Monash University gynaecological laparoscopic pelvi-trainer. Twelve candidates were consultants with 9-28 years of practice in operative laparoscopy, and 8 were registrars with up to six years of practice in operative laparoscopy. Seven consultants and one registrar were from rural Australia, and three consultants were from New Zealand. Candidates were marked against checklist criteria resulting in a pass/fail score, as well as a weighted scoring system. We retested 11 candidates one year later with the same stations. There was no improvement in electrosurgery skill after one year of obstetric and gynaecological practice. No candidate successfully completed the written electrosurgery station in the initial test. A slight improvement in the pass rate, to 18%, was observed in the second test. The pass rate of the diathermy station dropped from 50% to 36% in the second test. The study found ignorance of electrosurgery/diathermy among gynaecological surgeons. One year later, skills were no better.
Aspiring to Spectral Ignorance in Earth Observation
Oliver, S. A.
2016-12-01
Enabling robust, defensible and integrated decision making in the Era of Big Earth Data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g. combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation, we require automation of spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors through application of equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters, and a way to apply them systematically to relate spectral bands among numerous different sensors, is proposed. The base unit for comparison in this work is the relative spectral response. From this input, a determination of what constitutes equivalence can be made by a user, based on their own conceptualisation of equivalence.
Beyond duplicity and ignorance in global fisheries
Directory of Open Access Journals (Sweden)
Daniel Pauly
2009-06-01
The three decades following World War II were a period of rapidly increasing fishing effort and landings, but also of spectacular collapses, particularly in small pelagic fish stocks. This is also the period in which a toxic triad of catch underreporting, ignoring scientific advice and blaming the environment emerged as the standard response to ongoing fisheries collapses, which became increasingly frequent, finally engulfing major North Atlantic fisheries. The response to the depletion of traditional fishing grounds was an expansion of North Atlantic (and generally of northern hemisphere) fisheries in three dimensions: southward, into deeper waters and into new taxa, i.e. catching and marketing species of fish and invertebrates previously spurned, and usually lower in the food web. This expansion provided many opportunities for mischief, as illustrated by the European Union’s negotiated ‘agreements’ for access to the fish resources of Northwest Africa, China’s agreement-free exploitation of the same, and Japan blaming the resulting resource declines on the whales. This expansion also provided new opportunities for mislabelling seafood unfamiliar to North Americans and Europeans, and for misleading consumers, thus reducing the impact of seafood guides and similar efforts toward sustainability. With fisheries catches declining, aquaculture—despite all public relations efforts—not being able to pick up the slack, and rapidly increasing fuel prices, structural changes are to be expected in both the fishing industry and the scientific disciplines that study it and influence its governance. Notably, fisheries biology, now predominantly concerned with the welfare of the fishing industry, will have to be converted into fisheries conservation science, whose goal will be to resolve the toxic triad alluded to above, and thus maintain the marine biodiversity and ecosystems that provide existential services to fisheries. Similarly, fisheries...
Learning to ignore: acquisition of sustained attentional suppression.
Dixon, Matthew L; Ruppel, Justin; Pratt, Jay; De Rosa, Eve
2009-04-01
We examined whether the selection mechanisms committed to the suppression of ignored stimuli can be modified by experience to produce a sustained, rather than transient, change in behavior. Subjects repeatedly ignored the shape of stimuli, while attending to their color. On subsequent attention to shape, there was a robust and sustained decrement in performance that was selective to when shape was ignored across multiple-color-target contexts, relative to a single-color-target context. Thus, amount of time ignored was not sufficient to induce a sustained performance decrement. Moreover, in this group, individual differences in initial color target selection were associated with the subsequent performance decrement when attending to previously ignored stimuli. Accompanying this sustained decrement in performance was a transfer in the locus of suppression from an exemplar (e.g., a circle) to a feature (i.e., shape) level of representation. These data suggest that learning can influence attentional selection by sustained attentional suppression of ignored stimuli.
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the data...
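The identity underlying this approach (often called the likelihood-ratio trick) is that a probabilistic classifier score s(x) between samples from two hypotheses recovers their likelihood ratio as s/(1-s). A minimal sketch with two unit-variance Gaussians, where the Bayes-optimal classifier is available in closed form (in practice a trained, parameterized classifier approximates this score):

```python
import numpy as np

# Two hypotheses: x ~ N(mu1, 1) under H1 vs x ~ N(mu0, 1) under H0.
mu0, mu1 = 0.0, 1.0

def log_likelihood_ratio(x):
    """Exact log p(x|H1) - log p(x|H0) for unit-variance Gaussians."""
    return -0.5 * (x - mu1) ** 2 + 0.5 * (x - mu0) ** 2

def optimal_classifier_score(x):
    """s(x) = p(H1|x) for balanced classes: the Bayes-optimal classifier."""
    return 1.0 / (1.0 + np.exp(-log_likelihood_ratio(x)))

# The classifier score recovers the likelihood ratio: LR = s / (1 - s).
x = np.linspace(-3, 3, 7)
s = optimal_classifier_score(x)
lr_from_score = s / (1.0 - s)
lr_exact = np.exp(log_likelihood_ratio(x))
print(np.allclose(lr_from_score, lr_exact))  # True
```

The talk's extension is to make the classifier a function of the physics and nuisance parameters as well, so the same ratio identity yields a parameterized approximate likelihood ratio; this sketch shows only the basic identity.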
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and updated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
Energy Technology Data Exchange (ETDEWEB)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com [Wageningen University, P.O. Box 338, Wageningen 6700 AH (Netherlands); Heijungs, R. [Vrije Universiteit Amsterdam, De Boelelaan 1105, Amsterdam 1081 HV (Netherlands); Leiden University, Einsteinweg 2, Leiden 2333 CC (Netherlands)
2017-01-15
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
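The effect the authors quantify can be illustrated with a minimal sketch (coefficients and distributions here are illustrative, not from the paper's case study): for a linear model y = a*x1 + b*x2, the analytical output variance is a²σ1² + b²σ2² + 2abρσ1σ2, so ignoring a positive correlation (setting ρ = 0) underestimates the output variance.

```python
import numpy as np

a, b = 2.0, 3.0               # model coefficients: y = a*x1 + b*x2
s1, s2, rho = 1.0, 0.5, 0.8   # input standard deviations and correlation

# Analytical output variance with and without the correlation term.
var_with = (a * s1) ** 2 + (b * s2) ** 2 + 2 * a * b * rho * s1 * s2
var_ignored = (a * s1) ** 2 + (b * s2) ** 2

# Monte Carlo (sampling-approach) check of the correlated case.
rng = np.random.default_rng(0)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T
var_mc = np.var(a * x1 + b * x2)

print(var_with, var_ignored, var_mc)
```

Here ignoring the correlation gives 6.25 instead of 11.05, an underestimate of roughly 43 percent; with a negative ρ the bias would flip sign, matching the paper's point that correlations can push the output variance either way.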
On strategic ignorance of environmental harm and social norms
DEFF Research Database (Denmark)
Thunström, Linda; van’t Veld, Klaas; Shogren, Jason. F.
2014-01-01
Are people strategically ignorant of the negative externalities their activities cause the environment? Herein we examine if people avoid costless information on those externalities and use ignorance as an excuse to reduce pro-environmental behavior. We develop a theoretical framework in which people feel internal pressure (“guilt”) from causing harm to the environment (e.g., emitting carbon dioxide) as well as external pressure to conform to the social norm for pro-environmental behavior (e.g., offsetting carbon emissions). Our model predicts that people may benefit from avoiding information… …decreases (to 29 percent) when the information additionally reveals the share of air travelers who buy carbon offsets. We find evidence that some people use ignorance as an excuse to reduce pro-environmental behavior—ignorance significantly decreases the probability of buying carbon offsets.
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. Expansions in terms of reflection angles, on the other hand, result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
Diophantine approximation and badly approximable sets
DEFF Research Database (Denmark)
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
…The classical set Bad of ‘badly approximable’ numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension…
Willful Ignorance and the Death Knell of Critical Thought
Rubin, Daniel Ian
2018-01-01
Independent, critical thought has never been more important in the United States. In the Age of Trump, political officials spout falsehoods called "alternative facts" as if they were on equal footing with researchable, scientific data. At the same time, an unquestioning populace engages in acts of "willful ignorance" on a daily…
Tunnel Vision: New England Higher Education Ignores Demographic Peril
Hodgkinson, Harold L.
2004-01-01
This author states that American higher education ignores about 90 percent of the environment in which it operates. Colleges change admissions requirements without even informing high schools in their service areas. Community college graduates are denied access to four-year programs because of policy changes made only after it was too late for the…
Professional orientation and pluralistic ignorance among jail correctional officers.
Cook, Carrie L; Lane, Jodi
2014-06-01
Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were.
'More is less'. The tax effects of ignoring flow externalities
International Nuclear Information System (INIS)
Sandal, Leif K.; Steinshamn, Stein Ivar; Grafton, R. Quentin
2003-01-01
Using a model of non-linear, non-monotone decay of the stock pollutant, and starting from the same initial conditions, the paper shows that an optimal tax that corrects for both stock and flow externalities may result in a lower tax, fewer cumulative emissions (less decay in emissions) and higher output at the steady state than a corrective tax that ignores the flow externality. This 'more is less' result emphasizes that setting a corrective tax that ignores the flow externality, or imposing a corrective tax at too low a level where there exists only a stock externality, may affect both transitory and steady-state output, tax payments and cumulative emissions. The result has important policy implications for decision makers setting optimal corrective taxes and targeted emission limits whenever stock externalities exist
Egoism, ignorance and choice : on society's lethal infection
Camilleri, Jonathan
2015-01-01
The ability to choose and our innate selfish, or rather, self-preservative urges are a recipe for disaster. Combining this with man's ignorance by definition and especially his general refusal to accept it, inevitably leads to Man's demise as a species. It is our false notion of freedom which contributes directly to our collective death, and therefore, man's trying to escape death is, in the largest of ways, counterproductive.
The importance of ignoring: Alpha oscillations protect selectivity
Payne, Lisa; Sekuler, Robert
2014-01-01
Selective attention is often thought to entail an enhancement of some task-relevant stimulus or attribute. We discuss the perspective that ignoring irrelevant, distracting information plays a complementary role in information processing. Cortical oscillations within the alpha (8–14 Hz) frequency band have emerged as a marker of sensory suppression. This suppression is linked to selective attention for visual, auditory, somatic, and verbal stimuli. Inhibiting processing of irrelevant input mak...
Maggots in the Brain: Sequelae of Ignored Scalp Wound.
Aggarwal, Ashish; Maskara, Prasant
2018-01-01
A 26-year-old male had suffered a burn injury to his scalp in childhood and ignored it. He presented with a complaint of something crawling on his head. Inspection of his scalp revealed multiple maggots on the brain surface with erosion of overlying bone and scalp. He was successfully managed by surgical debridement and regular dressing. Copyright © 2017 Elsevier Inc. All rights reserved.
On uncertainty in information and ignorance in knowledge
Ayyub, Bilal M.
2010-05-01
This paper provides an overview of working definitions of knowledge, ignorance, information and uncertainty, and summarises a formalised philosophical and mathematical framework for their analysis. It provides a comparative examination of the generalised information theory and the generalised theory of uncertainty. It summarises the foundational bases for assessing the reliability of knowledge constructed as a collective set of justified true beliefs. It discusses system complexity for ancestor simulation potentials. It offers value-driven communication means of knowledge and contrarian knowledge using memes and memetics.
Can Strategic Ignorance Explain the Evolution of Love?
Bear, Adam; Rand, David G
2018-04-24
People's devotion to, and love for, their romantic partners poses an evolutionary puzzle: Why is it better to stop your search for other partners once you enter a serious relationship when you could continue to search for somebody better? A recent formal model based on "strategic ignorance" suggests that such behavior can be adaptive and favored by natural selection, so long as you can signal your unwillingness to "look" for other potential mates to your current partner. Here, we re-examine this conclusion with a more detailed model designed to capture specific features of romantic relationships. We find, surprisingly, that devotion does not typically evolve in our model: Selection favors agents who choose to "look" while in relationships and who allow their partners to do the same. Non-looking is only expected to evolve if there is an extremely large cost associated with being left by your partner. Our results therefore raise questions about the role of strategic ignorance in explaining the evolution of love. Copyright © 2018 Cognitive Science Society, Inc.
Exploitation of commercial remote sensing images: reality ignored?
Allen, Paul C.
1999-12-01
The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards launch and service of one to five meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprised of exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small base of users that exists today, it will certainly adversely affect the mid- to large-sized users of the future.
The Marley hypothesis: denial of racism reflects ignorance of history.
Nelson, Jessica C; Adams, Glenn; Salter, Phia S
2013-02-01
This study used a signal detection paradigm to explore the Marley hypothesis--that group differences in perception of racism reflect dominant-group denial of and ignorance about the extent of past racism. White American students from a midwestern university and Black American students from two historically Black universities completed surveys about their historical knowledge and perception of racism. Relative to Black participants, White participants perceived less racism in both isolated incidents and systemic manifestations of racism. They also performed worse on a measure of historical knowledge (i.e., they did not discriminate historical fact from fiction), and this group difference in historical knowledge mediated the differences in perception of racism. Racial identity relevance moderated group differences in perception of systemic manifestations of racism (but not isolated incidents), such that group differences were stronger among participants who scored higher on a measure of racial identity relevance. The results help illuminate the importance of epistemologies of ignorance: cultural-psychological tools that afford denial of and inaction about injustice.
The end of ignorance multiplying our human potential
Mighton, John
2008-01-01
A revolutionary call for a new understanding of how people learn. The End of Ignorance conceives of a world in which no child is left behind – a world based on the assumption that each child has the potential to be successful in every subject. John Mighton argues that by recognizing the barriers that we have experienced in our own educational development, by identifying the moment that we became disenchanted with a certain subject and forever closed ourselves off to it, we will be able to eliminate these same barriers from standing in the way of our children. A passionate examination of our present education system, The End of Ignorance shows how we all can work together to reinvent the way that we are taught. John Mighton, the author of The Myth of Ability, is the founder of JUMP Math, a system of learning based on the fostering of emergent intelligence. The program has proved so successful that an entire class of Grade 3 students, including so-called slow learners, scored over 90% on a Grade 6 math test. A ...
Lessons in Equality: From Ignorant Schoolmaster to Chinese Aesthetics
Directory of Open Access Journals (Sweden)
Ernest Ženko
2017-09-01
The postponement of equality is not only a recurring topic in Jacques Rancière’s writings, but also the most defining feature of modern Chinese aesthetics. Particularly in the period after the 1980s, when the country opened its doors to Western ideas, Chinese aesthetics has largely played a subordinate role in an imbalanced knowledge transfer in which structural inequality was only reinforced. Aesthetics in China plays an important role and is expected not only to interpret literature and art, but also to help build a harmonious society within a globalized world. This is the reason why some commentators – Wang Jianjiang being one of them – point out that it is of utmost importance to eliminate this imbalance and develop a proper Chinese aesthetics. Since the key issue in this development is the problem of inequality, an approach developed by Jacques Rancière, “the philosopher of equality”, is proposed. Even though Rancière wrote extensively about literature, art and aesthetics, a different approach found in his repertoire may prove more fruitful for confronting the problem of Chinese aesthetics. In 1987, he published a book titled The Ignorant Schoolmaster, which contributed to his ongoing philosophical emancipatory project and focused on inequality and its conditions in the realm of education. The Ignorant Schoolmaster nonetheless stretches far beyond the walls of the classroom, or even the educational system, and brings to the fore political implications that cluster around the fundamental core of Rancière's political philosophy: the definition of politics as the verification of the presupposition of the equality of intelligence. Equality cannot be postponed as a goal to be attained only in the future and, therefore, has to be considered as a premise of egalitarian politics that needs to operate as a presupposition. Article received: May 21, 2017; Article accepted: May 28, 2017; Published online
The importance of ignoring: Alpha oscillations protect selectivity.
Payne, Lisa; Sekuler, Robert
2014-06-01
Selective attention is often thought to entail an enhancement of some task-relevant stimulus or attribute. We discuss the perspective that ignoring irrelevant, distracting information plays a complementary role in information processing. Cortical oscillations within the alpha (8-14 Hz) frequency band have emerged as a marker of sensory suppression. This suppression is linked to selective attention for visual, auditory, somatic, and verbal stimuli. Inhibiting processing of irrelevant input makes responses more accurate and timely. It also helps protect material held in short-term memory against disruption. Furthermore, this selective process keeps irrelevant information from distorting the fidelity of memories. Memory is only as good as the perceptual representations on which it is based, and on whose maintenance it depends. Modulation of alpha oscillations can be exploited as an active, purposeful mechanism to help people pay attention and remember the things that matter.
International Nuclear Information System (INIS)
Ginsburg, C.A.
1980-01-01
In many problems, a desired property A of a function f(x) is determined by the behaviour of f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
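As a concrete illustration of the rational-resummation idea this record builds on, the sketch below forms the ordinary [1/1] Padé approximant of exp(x) from its first three Taylor coefficients; this is the classical construction, not the modulated approximant of the letter:

```python
from math import exp, factorial

# A [1/1] Padé approximant of exp(x) built from its Taylor coefficients:
# the classical rational resummation that the modulated approximant extends
# (illustrative sketch only).
c = [1/factorial(k) for k in range(3)]  # 1, 1, 1/2

# Match (p0 + p1*x)/(1 + q1*x) to the series through order x^2:
q1 = -c[2]/c[1]
p0, p1 = c[0], c[1] + c[0]*q1           # gives (1 + x/2)/(1 - x/2)

pade = lambda x: (p0 + p1*x)/(1 + q1*x)
taylor = lambda x: c[0] + c[1]*x + c[2]*x*x

x = 0.5
err_pade, err_taylor = abs(pade(x) - exp(x)), abs(taylor(x) - exp(x))
print(err_pade < err_taylor)
```

At x = 0.5 the rational form is already closer to exp(x) than the truncated series it was built from, which is the basic payoff of resummation.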
What Is Hospitality in the Academy? Epistemic Ignorance and the (Im)possible Gift
Kuokkanen, Rauna
2008-01-01
The academy is considered by many as the major Western institution of knowledge. This article, however, argues that the academy is characterized by prevalent "epistemic ignorance"--a concept informed by Spivak's discussion of "sanctioned ignorance." Epistemic ignorance refers to academic practices and discourses that enable the continued exclusion…
Bowker, Julie C.; Adams, Ryan E.; Fredstrom, Bridget K.; Gilman, Rich
2014-01-01
In this study on being ignored by peers, 934 twelfth-grade students reported on their experiences of being ignored, victimized, and socially withdrawn, and completed measures of friendship and psychological adjustment (depression, self-esteem, and global satisfaction). Peer nominations of being ignored, victimized, and accepted by peers were also…
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
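The greedy idea developed in the book can be seen in a few lines: a pure greedy algorithm (matching pursuit) repeatedly picks the basis element with the largest inner product against the residual. A toy sketch over an orthonormal basis, with arbitrary dimensions and coefficients:

```python
import numpy as np

# Minimal sketch of greedy approximation (pure matching pursuit) of a
# 2-sparse signal over an orthonormal basis; sizes are illustrative.
rng = np.random.default_rng(1)
n = 64
basis = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthonormal columns
x = 3.0*basis[:, 5] - 2.0*basis[:, 17]                # 2-sparse signal

residual, approx = x.copy(), np.zeros(n)
for _ in range(2):                        # one greedy step per nonzero
    coeffs = basis.T @ residual           # inner products with all atoms
    k = np.argmax(np.abs(coeffs))         # best-matching atom
    approx += coeffs[k]*basis[:, k]
    residual -= coeffs[k]*basis[:, k]

print(np.allclose(approx, x))
```

For an orthonormal basis the greedy steps are exact projections, so the 2-sparse signal is recovered in two iterations; for redundant dictionaries the same loop only converges approximately, which is the regime the book's theory addresses.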
Hooking up: Gender Differences, Evolution, and Pluralistic Ignorance
Directory of Open Access Journals (Sweden)
Chris Reiber
2010-07-01
“Hooking-up” – engaging in no-strings-attached sexual behaviors with uncommitted partners - has become a norm on college campuses, and raises the potential for disease, unintended pregnancy, and physical and psychological trauma. The primacy of sex in the evolutionary process suggests that predictions derived from evolutionary theory may be a useful first step toward understanding these contemporary behaviors. This study assessed the hook-up behaviors and attitudes of 507 college students. As predicted by behavioral-evolutionary theory: men were more comfortable than women with all types of sexual behaviors; women correctly attributed higher comfort levels to men, but overestimated men's actual comfort levels; and men correctly attributed lower comfort levels to women, but still overestimated women's actual comfort levels. Both genders attributed higher comfort levels to same-gendered others, reinforcing a pluralistic ignorance effect that might contribute to the high frequency of hook-up behaviors in spite of the low comfort levels reported and suggesting that hooking up may be a modern form of intrasexual competition between females for potential mates.
Sarajevo: Politics and Cultures of Remembrance and Ignorance
Directory of Open Access Journals (Sweden)
Adla Isanović
2017-10-01
This text critically reflects on cultural events organized to mark the 100th anniversary of the start of the First World War in Sarajevo and Bosnia & Herzegovina. It elaborates on disputes which showed that culture is at the centre of identity politics and struggles (which can also take a fascist nationalist form, accept the colonizer’s perspective, etc.), on how commemorations ‘swallowed’ the past and present, but primarily contextualizes, historicizes and politicizes Sarajevo 2014 and its politics of visibility. This case is approached as an example and symptom of the effects of the current state of capitalism, coloniality, racialization and subjugation, as central to Europe today. Article received: June 2, 2017; Article accepted: June 8, 2017; Published online: October 15, 2017; Original scholarly paper How to cite this article: Isanović, Adla. "Sarajevo: Politics and Cultures of Remembrance and Ignorance." AM Journal of Art and Media Studies 14 (2017): 133-144. doi: 10.25038/am.v0i14.199
Technology trends in econometric energy models: Ignorance or information?
International Nuclear Information System (INIS)
Boyd, G.; Kokkelenberg, E.; State Univ., of New York, Binghamton, NY; Ross, M.; Michigan Univ., Ann Arbor, MI
1991-01-01
Simple time trend variables in factor demand models can be statistically powerful variables, but may tell the researcher very little. Even more complex specifications of technical change, e.g. factor-biased, are still the econometrician's ''measure of ignorance'' about the shifts that occur in the underlying production process. Furthermore, in periods of rapid technology change the parameters based on time trends may be too large for long-run forecasting. When there is clearly identifiable engineering information about new technology adoption that changes the factor input mix, data for the technology adoption may be included in the traditional factor demand model to economically model specific factor-biased technical change and econometrically test their contribution. The adoption of thermomechanical pulping (TMP) and electric arc furnaces (EAF) are two electricity-intensive technology trends in the paper and steel industries, respectively. This paper presents the results of including these variables in a traditional econometric factor demand model, which is based on the Generalized Leontief. The coefficients obtained for this ''engineering based'' technical change compare quite favorably to engineering estimates of the impact of TMP and EAF on electricity intensities, improve the estimates of the other price coefficients, and yield a more believable long-run electricity forecast. 6 refs., 1 fig
Approximate symmetries of Hamiltonians
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
Approximating distributions from moments
Pawula, R. F.
1987-11-01
A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
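The beta case mentioned above, where the fit is exact, can be sketched directly: for a symmetric beta density p(x) ∝ (1 − x²)^(a−1) on [−1, 1], the second moment m2 = 1/(2a+1) determines the Pearson parameter a, and the fourth moment 3/((2a+1)(2a+3)) is then a prediction rather than a fitted quantity. A small sketch with an arbitrary parameter value:

```python
from math import gamma

# Pearson-style moment fit with a symmetric beta on [-1, 1],
# p(x) = norm * (1 - x^2)^(a-1). For this family the fit is exact:
# m2 = 1/(2a+1) recovers a, and m4 = 3/((2a+1)(2a+3)) is then predicted.
a = 2.0
m2 = 1/(2*a + 1)                 # second moment of the target density
a_hat = (1/m2 - 1)/2             # invert m2 to recover the parameter
m4_pred = 3/((2*a_hat + 1)*(2*a_hat + 3))

# Check the predicted m4 against direct numerical integration of x^4 p(x).
norm = gamma(2*a)/(gamma(a)**2 * 2**(2*a - 1))   # normalizing constant
N = 200_000
xs = [-1 + 2*(i + 0.5)/N for i in range(N)]       # midpoint rule
m4_num = sum(x**4 * norm*(1 - x*x)**(a - 1) for x in xs) * (2/N)
print(abs(m4_pred - m4_num) < 1e-4)
```

For densities outside the beta family the same moment inversion gives an approximation rather than an identity, which is the regime the paper's filtering examples operate in.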
CONTRIBUTIONS TO RATIONAL APPROXIMATION,
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Should we ignore U-235 series contribution to dose?
International Nuclear Information System (INIS)
Beaugelin-Seiller, Karine; Goulet, Richard; Mihok, Steve; Beresford, Nicholas A.
2016-01-01
Environmental Risk Assessment (ERA) methodology for radioactive substances is an important regulatory tool for assessing the safety of licensed nuclear facilities for wildlife, and the environment as a whole. ERAs are therefore expected to be both fit for purpose and conservative. When uranium isotopes are assessed, there are many radioactive decay products which could be considered. However, risk assessors usually assume U-235 and its daughters contribute negligibly to radiological dose. The validity of this assumption has not been tested: what might the U-235 family contribution be and how does the estimate depend on the assumptions applied? In this paper we address this question by considering aquatic wildlife in Canadian lakes exposed to historic uranium mining practices. A full theoretical approach was used, in parallel to a more realistic assessment based on measurements of several elements of the U decay chains. The U-235 family contribution varied between about 4% and 75% of the total dose rate depending on the assumptions of the equilibrium state of the decay chains. Hence, ignoring the U-235 series will not result in conservative dose assessments for wildlife. These arguments provide a strong case for more in situ measurements of the important members of the U-235 chain and for its consideration in dose assessments. - Highlights: • Realistic ecological risk assessment infers a complete inventory of radionuclides. • U-235 family may not be minor when assessing total dose rates experienced by biota. • There is a need to investigate the real state of equilibrium decay of U chains. • There is a need to improve the capacity to measure all elements of the U decay chains.
Expectation Consistent Approximate Inference
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
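The prototypical Korovkin-style result is that Bernstein polynomials converge uniformly to any continuous f on [0, 1]; convergence on the three test functions 1, x, x² already guarantees it. A quick numerical sketch, with an arbitrary target function and degree:

```python
from math import comb

# Bernstein polynomial approximation on [0, 1]: the classical constructive
# example behind Korovkin-type theorems (illustrative choice of f and n).
def bernstein(f, n, x):
    return sum(f(k/n)*comb(n, k)*x**k*(1 - x)**(n - k) for k in range(n + 1))

f = lambda x: abs(x - 0.5)           # continuous but not differentiable
err = max(abs(bernstein(f, 200, x/100) - f(x/100)) for x in range(101))
print(err < 0.05)
```

The error at the kink shrinks only like n^(-1/2), which is exactly the sort of quantitative rate the book's Korovkin-type bounds make precise.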
That Escalated Quickly—Planning to Ignore RPE Can Backfire
Directory of Open Access Journals (Sweden)
Maik Bieleke
2017-09-01
Ratings of perceived exertion (RPE) are routinely assessed in exercise science, and RPE is substantially associated with physiological criterion measures. According to the psychobiological model of endurance, RPE is a central limiting factor in performance. While RPE is known to be affected by psychological manipulations, it remains to be examined whether RPE can be self-regulated during static muscular endurance exercises to enhance performance. In this experiment, we investigate the effectiveness of the widely used and recommended self-regulation strategy of if-then planning (i.e., implementation intentions) in down-regulating RPE and improving performance in a static muscular endurance task. 62 female students (age: M = 23.7 years, SD = 4.0) were randomly assigned to an implementation intention or a control condition and performed a static muscular endurance task. They held two intertwined rings as long as possible while avoiding contacts between the rings. In the implementation intention condition, participants had an if-then plan: “If the task becomes too strenuous for me, then I ignore the strain and tell myself: Keep going!” Every 25 ± 10 s participants reported their RPE along with their perceived pain. Endurance performance was measured as time to failure, along with contact errors as a measure of performance quality. No differences emerged between implementation intention and control participants regarding time to failure and performance quality. However, mixed-effects model analyses revealed a significant Time-to-Failure × Condition interaction for RPE. Compared to the control condition, participants in the implementation intention condition reported substantially greater increases in RPE during the second half of the task and reached higher total values of RPE before task termination. A similar but weaker pattern emerged for perceived pain. Our results demonstrate that RPE during an endurance task can be self-regulated with if
Approximate and renormgroup symmetries
Energy Technology Data Exchange (ETDEWEB)
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximate and renormgroup symmetries
International Nuclear Information System (INIS)
Ibragimov, Nail H.; Kovalev, Vladimir F.
2009-01-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximations of Fuzzy Systems
Directory of Open Access Journals (Sweden)
Vinai K. Singh
2013-03-01
Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions.
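The Wang-style construction summarized above can be sketched in a few lines: rules centered on a grid, Gaussian membership functions, and centroid (weighted-average) defuzzification. The grid size, width, and target function sin(x) below are illustrative assumptions, not choices from the paper:

```python
import math

def fuzzy_approximator(centers, sigma, f):
    """One-input fuzzy system: Gaussian membership functions, product
    inference (trivial with a single input) and centroid
    (weighted-average) defuzzification."""
    consequents = [f(c) for c in centers]  # rule outputs at the rule centers
    def system(x):
        weights = [math.exp(-((x - c) / sigma) ** 2) for c in centers]
        return sum(w * v for w, v in zip(weights, consequents)) / sum(weights)
    return system

# 21 rules on [0, pi], width tied to the grid spacing (illustrative choices)
centers = [i * math.pi / 20 for i in range(21)]
approx = fuzzy_approximator(centers, sigma=math.pi / 20, f=math.sin)
max_err = max(abs(approx(i * math.pi / 200) - math.sin(i * math.pi / 200))
              for i in range(201))
```

Refining the grid (more rules, smaller sigma) drives `max_err` toward zero, which is the content of the universal-approximation results the abstract cites.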
Potvin, Guy
2015-10-01
We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
International Nuclear Information System (INIS)
Knobloch, A.F.
1980-01-01
A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de]
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to the renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
On Covering Approximation Subspaces
Directory of Open Access Journals (Sweden)
Xun Ge
2009-06-01
Full Text Available Let (U';C') be a subspace of a covering approximation space (U;C) and let X⊂U'. In this paper, we show that B'(X)⊂B(X)∩U', with equality iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U';C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
Prestack wavefield approximations
Alkhalifah, Tariq
2013-01-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
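The core Padé idea invoked here, replacing a truncated power series by a rational function that stays accurate beyond the series' range, can be illustrated with the classical [2/2] Padé approximant of exp(x). This is a textbook example, not the paper's DSR-specific expansion:

```python
import math

def pade22_exp(x):
    """Classical [2/2] Pade approximant of exp(x); its Taylor expansion
    matches exp(x) through fourth order."""
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

def taylor4_exp(x):
    """Degree-4 Taylor polynomial of exp(x), the same order of data."""
    return sum(x ** k / math.factorial(k) for k in range(5))

x = 1.0
pade_err = abs(pade22_exp(x) - math.exp(x))
taylor_err = abs(taylor4_exp(x) - math.exp(x))
# The rational form is noticeably closer than the polynomial built from
# the same expansion coefficients.
```

The same mechanism underlies the abstract's construction: the denominator series absorbs behavior (here growth, there a singularity) that a plain polynomial expansion handles poorly.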
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...
Approximation by Cylinder Surfaces
DEFF Research Database (Denmark)
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Prestack wavefield approximations
Alkhalifah, Tariq
2013-09-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
Battista, Michael T.
1999-01-01
Because traditional instruction ignores students' personal construction of mathematical meaning, mathematical thought development is not properly nurtured. Several issues must be addressed, including adults' ignorance of math- and student-learning processes, identification of math-education research specialists, the myth of coverage, testing…
Persistence of Memory for Ignored Lists of Digits: Areas of Developmental Constancy and Change.
Cowan, Nelson; Nugent, Lara D.; Elliott, Emily M.; Saults, J. Scott
2000-01-01
Examined persistence of sensory memory by studying developmental differences in recall of attended and ignored lists of digits for second-graders, fifth-graders, and adults. Found developmental increase in the persistence of memory only for the final item in an ignored list, which is the item for which sensory memory is thought to be the most…
Modelling non-ignorable missing data mechanisms with item response theory models
Holman, Rebecca; Glas, Cornelis A.W.
2005-01-01
A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled
Modelling non-ignorable missing-data mechanisms with item response theory models
Holman, Rebecca; Glas, Cees A. W.
2005-01-01
A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled
Is There Such a Thing as 'White Ignorance' in British Education?
Bain, Zara
2018-01-01
I argue that political philosopher Charles W. Mills' twin concepts of 'the epistemology of ignorance' and 'white ignorance' are useful tools for thinking through racial injustice in the British education system. While anti-racist work in British education has a long history, racism persists in British primary, secondary and tertiary education. For…
IgM nephropathy; can we still ignore it.
Vanikar, Aruna
2013-04-01
IgM nephropathy (IgMN) is a relatively less recognized clinico-immunopathological entity in the domain of glomerulonephritis, often thought to be a bridge between minimal change disease and focal segmental glomerulosclerosis. Directory of Open Access Journals (DOAJ), Google Scholar, Pubmed (NLM), LISTA (EBSCO) and Web of Science have been searched. IgM nephropathy can present as nephrotic syndrome or, less commonly, with subnephrotic proteinuria or, rarely, hematuria. About 30% of patients respond to steroids whereas others are steroid-dependent/resistant. They should be given a trial of Rituximab or stem cell therapy. IgMN is an important and rather neglected pathology responsible for renal morbidity in children and adults in developing countries as compared to developed nations, with an incidence of 2-18.5% of native biopsies. Abnormal T-cell function with hyperfunctioning suppressor T-cells is believed to be responsible for this disease entity. Approximately one third of the patients are steroid responders whereas the remaining two thirds are steroid-resistant or -dependent. Therapeutic trials including cell therapies targeting suppressor T-cells are required.
The concept of ignorance in a risk assessment and risk management context
International Nuclear Information System (INIS)
Aven, T.; Steen, R.
2010-01-01
There are many definitions of ignorance in the context of risk assessment and risk management. Most refer to situations in which there is a lack of knowledge, a poor basis for probability assignments, and possible outcomes that are not (fully) known. The purpose of this paper is to discuss the ignorance concept in this setting. Based on a set of risk and uncertainty features, we establish conceptual structures characterising the level of ignorance. These features include the definition of chances (relative frequency-interpreted probabilities) and the existence of scientific uncertainties. Based on these structures, we suggest a definition of ignorance linked to scientific uncertainties, i.e. the lack of understanding of how consequences of the activity are influenced by the underlying factors. In this way, ignorance can be viewed as a condition for applying the precautionary principle. The discussion is also linked to the use and boundaries of risk assessments in the case of large uncertainties, and the methods for classifying risk and uncertainty problems.
An improved saddlepoint approximation.
Gillespie, Colin S; Renshaw, Eric
2007-08-01
Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second-order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
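The basic saddlepoint recipe the abstract builds on, solving K'(s) = x for the cumulant generating function K and plugging into exp(K(s) - sx)/sqrt(2*pi*K''(s)), can be sketched for a Gamma variable, where the saddle equation is solvable in closed form and the approximation is known to match the exact density up to a constant Stirling factor. This is an illustrative textbook case, not the authors' extension:

```python
import math

def saddlepoint_gamma_density(x, k):
    """Saddlepoint density approximation for Gamma(shape=k, rate=1),
    built only from its cumulant generating function K(t) = -k*log(1-t)."""
    s_hat = 1.0 - k / x                   # solves K'(s) = k/(1-s) = x
    K = -k * math.log(1.0 - s_hat)        # CGF at the saddlepoint
    K2 = k / (1.0 - s_hat) ** 2           # K''(s_hat)
    return math.exp(K - s_hat * x) / math.sqrt(2.0 * math.pi * K2)

def exact_gamma_density(x, k):
    return x ** (k - 1) * math.exp(-x) / math.gamma(k)

# For the Gamma family the approximation errs only by a constant
# (Stirling) factor, so the relative error is small and uniform in x.
k = 3.0
errors = [abs(saddlepoint_gamma_density(x, k) / exact_gamma_density(x, k) - 1.0)
          for x in (1.0, 3.0, 6.0)]
```

For distributions specified only through a handful of cumulants, K must itself be approximated, which is exactly where the support and accuracy issues discussed in the abstract arise.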
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal in terms of polynomial expansions in the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Approximating Preemptive Stochastic Scheduling
Megow, Nicole; Vredeveld, Tjark
2009-01-01
We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
Cyclic approximation to stasis
Directory of Open Access Journals (Sweden)
Stewart D. Johnson
2009-06-01
Full Text Available Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.
International Nuclear Information System (INIS)
El Sawi, M.
1983-07-01
A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to be leading in the order of iteration, and thus possibly has the ability of accelerating the convergence of the solution. The method is also extended for the solution of inhomogeneous equations. (author)
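For context, the leading-order approximation that such extensions generalize is the textbook WKBJ form (shown here for orientation; the paper's higher-order standard form is not reproduced): for $y'' + q(x)\,y = 0$ with slowly varying $q$,

```latex
y(x) \;\approx\; q(x)^{-1/4}\left[
  C_1 \exp\!\left( i\!\int^{x}\!\sqrt{q(t)}\,dt \right)
  + C_2 \exp\!\left( -i\!\int^{x}\!\sqrt{q(t)}\,dt \right)
\right]
```

Higher-order schemes of the kind described in the abstract correct both the amplitude factor $q^{-1/4}$ and the phase integral iteratively.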
The relaxation time approximation
International Nuclear Information System (INIS)
Gairola, R.P.; Indu, B.D.
1991-01-01
A plausible approximation has been made to estimate the relaxation time from a knowledge of the transition probability of phonons from one state (r vector, q vector) to another state (r' vector, q' vector) as a result of collision. The relaxation time thus obtained shows a strong dependence on temperature and a weak dependence on the wave vector. In view of this dependence, the relaxation time has been expressed in terms of a temperature Taylor's series in the first Brillouin zone. Consequently, a simple model for estimating the thermal conductivity is suggested. The calculations become much easier than in the Callaway model. (author). 14 refs
Polynomial approximation on polytopes
Totik, Vilmos
2014-01-01
Polynomial approximation on convex polytopes in \mathbf{R}^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness, matching direct and converse estimates are proven. In the L^p-case so-called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s, when some of the present findings were established for special, so-called simple polytopes.
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
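The likelihood-free mechanism described above can be sketched with the simplest ABC variant, rejection sampling: draw a parameter from the prior, simulate data, and keep the draw when a summary statistic lands close to the observed one. The uniform prior, Gaussian simulator, and tolerance below are illustrative assumptions, not from any particular study:

```python
import random
import statistics

def abc_rejection(observed_mean, n_samples=20000, tolerance=0.05,
                  data_size=50, seed=1):
    """Rejection ABC: never evaluates a likelihood, only simulates data
    and compares a summary statistic to the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(-2.0, 2.0)                       # prior draw
        data = [rng.gauss(theta, 1.0) for _ in range(data_size)]
        if abs(statistics.fmean(data) - observed_mean) < tolerance:
            accepted.append(theta)                           # keep close matches
    return accepted

posterior = abc_rejection(observed_mean=0.8)
estimate = statistics.fmean(posterior)
```

The accepted draws approximate the posterior; shrinking the tolerance sharpens the approximation at the cost of a lower acceptance rate, the trade-off behind the assessment issues the abstract mentions.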
The random phase approximation
International Nuclear Information System (INIS)
Schuck, P.
1985-01-01
RPA is the adequate theory to describe vibrations of the nucleus of very small amplitude. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue, the potential corresponding to such eigenmodes of very small amplitude should be rather stiff; otherwise the motion risks being a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable groundstate configuration (they must, e.g., be very stiff against deformation). This is usually the case for doubly magic nuclei or nuclei close to magic ones, and for nuclei in the middle of proton and neutron shells which develop a very stable groundstate deformation. We take the deformation as an example, but there are many other possible degrees of freedom, for example compression modes, isovector degrees of freedom, spin degrees of freedom, and many more
The quasilocalized charge approximation
International Nuclear Information System (INIS)
Kalman, G J; Golden, K I; Donko, Z; Hartmann, P
2005-01-01
The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short-time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach, together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement and instances of disagreement between the two
Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.
2016-01-01
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
Ignoring the Obvious: Combined Arms and Fire and Maneuver Tactics Prior to World War I
National Research Council Canada - National Science Library
Bruno, Thomas
2002-01-01
The armies that entered WWI ignored many pre-war lessons. Though WWI armies later developed revolutionary tactical-level advances, scholars claim that this tactical evolution followed an earlier period...
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...
DEFF Research Database (Denmark)
Hyldgaard, Kirsten
2006-01-01
Psychoanalysis has nothing to say about education. Psychoanalysis has something to say about pedagogy; psychoanalysis has pedagogical-philosophical implications. Pedagogy, in distinction to education, addresses the question of the subject. This implies that pedagogical theory is not and cannot be a s...
1996-01-01
The Rajiv Gandhi Foundation (RGF), together with the AIMS-affiliated NGO AIDS Cell, Delhi, held a workshop as part of an effort to raise a 90-doctor RGF AIDS workforce which will work together with nongovernmental organizations on AIDS prevention, control, and management. 25 general practitioners registered with the Indian Medical Council, who have practiced medicine in Delhi for the past 10-20 years, responded to a pre-program questionnaire on HIV-related knowledge and attitudes. 6 out of the 25 physicians did not know what the acronym AIDS stands for, extremely low awareness of the clinical aspects of the disease was revealed, 9 believed in the conspiracy theory of HIV development and accidental release by the US Central Intelligence Agency, 8 believed that AIDS is a problem of only the promiscuous, 18 did not know that the mode of HIV transmission is similar to that of the hepatitis B virus, 12 were unaware that HIV-infected people will test HIV-seronegative during the first three months after initial infection and that they will develop symptoms of full-blown AIDS only after 10 years, 10 did not know the name of even one drug used to treat the disease, 3 believed aspirin to be an effective drug against AIDS, many believed fantastic theories about the modes of HIV transmission, and many were acutely homophobic. Efforts were made to clear misconceptions about HIV during the workshop. It is hoped that participating doctors' attitudes about AIDS and the high-risk groups affected by it were also improved.
Self-similar factor approximants
International Nuclear Information System (INIS)
Gluzman, S.; Yukalov, V.I.; Sornette, D.
2003-01-01
The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the self-similar exponential approximants and self-similar root approximants obtained earlier. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Pade approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy significantly surpasses that of the most accurate Pade approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.
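The accuracy-through-order conditions mentioned above also define the ordinary Pade approximants that factor approximants contain as a special case. As a baseline sketch (the simplest diagonal [1/1] Pade approximant, not the factor construction itself), the coefficients of a rational approximant can be matched to the first three Taylor coefficients:

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (p0 + p1*x) / (1 + q1*x) matching the
    Taylor series c0 + c1*x + c2*x^2 through order x^2
    (the accuracy-through-order conditions)."""
    q1 = -c2 / c1          # cancel the x^2 term
    p0 = c0                # match the constant term
    p1 = c1 + c0 * q1      # match the x term
    return lambda x: (p0 + p1 * x) / (1.0 + q1 * x)

# Example: exp(x) has Taylor coefficients 1, 1, 1/2, giving the
# classical approximant (1 + x/2) / (1 - x/2).
approx = pade_1_1(1.0, 1.0, 0.5)
print(approx(0.1), math.exp(0.1))  # close agreement near x = 0
```

Factor approximants generalize this idea by using products of power-law factors whose exponents and amplitudes are fixed by the same order-by-order matching.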
Mid-adolescent neurocognitive development of ignoring and attending emotional stimuli
Directory of Open Access Journals (Sweden)
Nora C. Vetter
2015-08-01
Full Text Available Appropriate reactions toward emotional stimuli depend on the distribution of prefrontal attentional resources. In mid-adolescence, prefrontal top-down control systems are less engaged, while subcortical bottom-up emotional systems are more engaged. We used functional magnetic resonance imaging to follow the neural development of attentional distribution, i.e. attending versus ignoring emotional stimuli, in adolescence. 144 healthy adolescents were studied longitudinally at age 14 and 16 while performing a perceptual discrimination task. Participants viewed two pairs of stimuli – one emotional, one abstract – and reported on one pair whether the items were the same or different, while ignoring the other pair. Hence, two experimental conditions were created: “attending emotion/ignoring abstract” and “ignoring emotion/attending abstract”. Emotional valence varied between negative, positive, and neutral. Across conditions, reaction times and error rates decreased and activation in the anterior cingulate and inferior frontal gyrus increased from age 14 to 16. In contrast, subcortical regions showed no developmental effect. Activation of the anterior insula increased across ages for attending positive and ignoring negative emotions. Results suggest an ongoing development of prefrontal top-down resources elicited by emotional attention from age 14 to 16 while activity of subcortical regions representing bottom-up processing remains stable.
Investigating Deviance Distraction and the Impact of the Modality of the To-Be-Ignored Stimuli.
Marsja, Erik; Neely, Gregory; Ljungberg, Jessica K
2018-03-01
It has been suggested that deviance distraction is caused by unexpected sensory events in the to-be-ignored stimuli violating the cognitive system's predictions of incoming stimuli. The majority of research has used methods in which the to-be-ignored expected (standard) and unexpected (deviant) stimuli are presented within the same modality. Less is known about the behavioral impact of deviance distraction when the to-be-ignored stimuli are presented in different modalities (e.g., standards and deviants presented in different modalities). In three experiments using cross-modal oddball tasks with mixed-modality to-be-ignored stimuli, we examined the distractive role of unexpected auditory deviants presented in a continuous stream of expected standard vibrations. The results showed that deviance distraction seems to depend upon the to-be-ignored stimuli being presented within the same modality, and that the simple omission of something expected (in this case, a standard vibration) may be enough to capture attention and distract performance.
Non-ignorable missingness item response theory models for choice effects in examinee-selected items.
Liu, Chen-Wei; Wang, Wen-Chung
2017-11-01
Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.
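The unidimensional components combined in the model above are standard IRT models. As background, a minimal sketch of a two-parameter logistic (2PL) item response function (the parameter values here are illustrative assumptions, not the authors' estimates):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that an examinee with
    ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a.
print(p_correct(0.0, a=1.2, b=0.0))
# Higher ability raises the probability; higher difficulty lowers it.
print(p_correct(1.0, a=1.2, b=0.0), p_correct(0.0, a=1.2, b=1.0))
```

In the ESI setting, likelihoods built from such response functions are valid only if missingness is ignorable; the authors' two-dimensional model adds a second latent variable for the nominal selection pattern to handle MNAR data.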
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Ignorance Is Bliss, But for Whom? The Persistent Effect of Good Will on Cooperation
Directory of Open Access Journals (Sweden)
Mike Farjam
2016-10-01
Full Text Available Who benefits from the ignorance of others? We address this question from the point of view of a policy maker who can induce some ignorance into a system of agents competing for resources. Evolutionary game theory shows that when unconditional cooperators or ignorant agents compete with defectors in two-strategy settings, unconditional cooperators get exploited and are rendered extinct. In contrast, conditional cooperators, by utilizing some kind of reciprocity, are able to survive and sustain cooperation when competing with defectors. We study how cooperation thrives in a three-strategy setting where there are unconditional cooperators, conditional cooperators and defectors. By means of simulation on various kinds of graphs, we show that conditional cooperators benefit from the existence of unconditional cooperators in the majority of cases. However, in worlds that make cooperation hard to evolve, defectors benefit.
Roles of dark energy perturbations in dynamical dark energy models: can we ignore them?
Park, Chan-Gyung; Hwang, Jai-chan; Lee, Jae-heon; Noh, Hyerim
2009-10-09
We show the importance of properly including the perturbations of the dark energy component in the dynamical dark energy models based on a scalar field and modified gravity theories in order to meet with present and future observational precisions. Based on a simple scaling scalar field dark energy model, we show that observationally distinguishable substantial differences appear by ignoring the dark energy perturbation. By ignoring it the perturbed system of equations becomes inconsistent and deviations in (gauge-invariant) power spectra depend on the gauge choice.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline approximation; optimal reconstruction of functions and linear functionals. Many of the results are based...
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contains surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Forgotten and Ignored: Special Education in First Nations Schools in Canada
Phillips, Ron
2010-01-01
Usually, reviews of special education in Canada describe the special education programs, services, policies, and legislation that are provided by the provinces and territories. These reviews consistently ignore the special education programs, services, policies, and legislation that are provided by the federal government of Canada. The federal government…
The Capital Costs Conundrum: Why Are Capital Costs Ignored and What Are the Consequences?
Winston, Gordon C.
1993-01-01
Colleges and universities historically have ignored the capital costs associated with institutional administration in their estimates of overall and per-student costs. This neglect leads to distortion of data, misunderstandings, and uninformed decision making. The real costs should be recognized in institutional accounting. (MSE)
Monitoring your friends, not your foes: strategic ignorance and the delegation of real authority
Dominguez-Martinez, S.; Sloof, R.; von Siemens, F.
2010-01-01
In this laboratory experiment we study the use of strategic ignorance to delegate real authority within a firm. A worker can gather information on investment projects, while a manager makes the implementation decision. The manager can monitor the worker. This allows her to better exploit the
Monitored by your friends, not your foes: Strategic ignorance and the delegation of real authority
Dominguez-Martinez, S.; Sloof, R.; von Siemens, F.
2012-01-01
In this laboratory experiment we study the use of strategic ignorance to delegate real authority within a firm. A worker can gather information on investment projects, while a manager makes the implementation decision. The manager can monitor the worker. This allows her to exploit any information
Mathematical Practice as Sculpture of Utopia: Models, Ignorance, and the Emancipated Spectator
Appelbaum, Peter
2012-01-01
This article uses Ranciere's notion of the ignorant schoolmaster and McElheny's differentiation of artist's models from those of the architect and scientist to propose the reconceptualization of mathematics education as the support of emancipated spectators and sculptors of utopia.
The effects of systemic crises when investors can be crisis ignorant
H.J.W.G. Kole (Erik); C.G. Koedijk (Kees); M.J.C.M. Verbeek (Marno)
2004-01-01
textabstractSystemic crises can largely affect asset allocations due to the rapid deterioration of the risk-return trade-off. We investigate the effects of systemic crises, interpreted as global simultaneous shocks to financial markets, by introducing an investor adopting a crisis ignorant or crisis
Geographies of knowing, geographies of ignorance: jumping scale in Southeast Asia
van Schendel, W.
2002-01-01
'Area studies' use a geographical metaphor to visualise and naturalise particular social spaces as well as a particular scale of analysis. They produce specific geographies of knowing but also create geographies of ignorance. Taking Southeast Asia as an example, in this paper I explore how areas are
The Trust Game Behind the Veil of Ignorance : A Note on Gender Differences
Vyrastekova, J.; Onderstal, A.M.
2005-01-01
We analyse gender differences in the trust game in a "behind the veil of ignorance" design. This method yields strategies that are consistent with actions observed in the classical trust game experiments. We observe that, on average, men and women do not differ in "trust", and that women are slightly
The trust game behind the veil of ignorance: A note on gender differences
Vyrastekova, J.; Onderstal, S.
2008-01-01
We analyze gender differences in the trust game in a "behind the veil of ignorance" design. This method yields strategies that are consistent with actions observed in the classical trust game experiments. We observe that, on average, men and women do not differ in "trust", and that women are
The Ignorant Facilitator: Education, Politics and Theatre in Co-Communities
Lev-Aladgem, Shulamith
2015-01-01
This article discusses the book "The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation" by the French philosopher, Jacques Rancière. Its intention is to study the potential contribution of this text to the discourse of applied theatre (theatre in co-communities) in general, and the role of the facilitator in particular. It…
Ignoring Memory Hints: The Stubborn Influence of Environmental Cues on Recognition Memory
Selmeczy, Diana; Dobbins, Ian G.
2017-01-01
Recognition judgments can benefit from the use of environmental cues that signal the general likelihood of encountering familiar versus unfamiliar stimuli. While incorporating such cues is often adaptive, there are circumstances (e.g., eyewitness testimony) in which observers should fully ignore environmental cues in order to preserve memory…
Uncertain Climate Forecasts From Multimodel Ensembles: When to Use Them and When to Ignore Them
Jewson, Stephen; Rowlands, Dan
2010-01-01
Uncertainty around multimodel ensemble forecasts of changes in future climate reduces the accuracy of those forecasts. For very uncertain forecasts this effect may mean that the forecasts should not be used. We investigate the use of the well-known Bayesian Information Criterion (BIC) to make the decision as to whether a forecast should be used or ignored.
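The BIC referred to above trades off fit against model complexity, BIC = k ln n - 2 ln L, with lower values preferred. A small sketch of the decision rule (the log-likelihood values are hypothetical, not taken from the paper):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L). Lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of two forecast models to n = 100 observations:
# the richer model improves the log-likelihood only marginally.
simple = bic(log_likelihood=-120.0, n_params=2, n_obs=100)
complex_ = bic(log_likelihood=-119.0, n_params=6, n_obs=100)
use_complex = complex_ < simple
print(simple, complex_, use_complex)
```

Here the small likelihood gain does not justify four extra parameters, so the BIC favors the simpler model; the same comparison logic underlies the use-or-ignore decision for an uncertain forecast.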
Inattentional blindness for ignored words: comparison of explicit and implicit memory tasks.
Butler, Beverly C; Klein, Raymond
2009-09-01
Inattentional blindness is described as the failure to perceive a supra-threshold stimulus when attention is directed away from that stimulus. Based on performance on an explicit recognition memory test and concurrent functional imaging data Rees, Russell, Frith, and Driver [Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504-2507] reported inattentional blindness for word stimuli that were fixated but ignored. The present study examined both explicit and implicit memory for fixated but ignored words using a selective-attention task in which overlapping picture/word stimuli were presented at fixation. No explicit awareness of the unattended words was apparent on a recognition memory test. Analysis of an implicit memory task, however, indicated that unattended words were perceived at a perceptual level. Thus, the selective-attention task did not result in perfect filtering as suggested by Rees et al. While there was no evidence of conscious perception, subjects were not blind to the implicit perceptual properties of fixated but ignored words.
Parabolic approximation method for fast magnetosonic wave propagation in tokamaks
International Nuclear Information System (INIS)
Phillips, C.K.; Perkins, F.W.; Hwang, D.Q.
1985-07-01
Fast magnetosonic wave propagation in a cylindrical tokamak model is studied using a parabolic approximation method in which poloidal variations of the wave field are considered weak in comparison to the radial variations. Diffraction effects, which are ignored by ray tracing methods, are included self-consistently using the parabolic method, since continuous representations for the wave electromagnetic fields are computed directly. Numerical results are presented which illustrate the cylindrical convergence of the launched waves into a diffraction-limited focal spot on the cyclotron absorption layer near the magnetic axis for a wide range of plasma confinement parameters.
Some results in Diophantine approximation
DEFF Research Database (Denmark)
Pedersen, Steffen Højris
This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered...
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized, with high probability, for all functions in balls of a reproducing kernel Hilbert space, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Spherical Approximation on Unit Sphere
Directory of Open Access Journals (Sweden)
Eman Samir Bhaya
2018-01-01
Full Text Available In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in L^p spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness.
Double jeopardy, the equal value of lives and the veil of ignorance: a rejoinder to Harris.
McKie, J; Kuhse, H; Richardson, J; Singer, P
1996-08-01
Harris levels two main criticisms against our original defence of QALYs (Quality Adjusted Life Years). First, he rejects the assumption implicit in the QALY approach that not all lives are of equal value. Second, he rejects our appeal to Rawls's veil of ignorance test in support of the QALY method. In the present article we defend QALYs against Harris's criticisms. We argue that some of the conclusions Harris draws from our view that resources should be allocated on the basis of potential improvements in quality of life and quantity of life are erroneous, and that others lack the moral implications Harris claims for them. On the other hand, we defend our claim that a rational egoist, behind a veil of ignorance, could consistently choose to allocate life-saving resources in accordance with the QALY method, despite Harris's claim that a rational egoist would allocate randomly if there is no better than a 50% chance of being the recipient.
Crimes committed by indigenous people in ignorance of the law
Directory of Open Access Journals (Sweden)
Diego Fernando Chimbo Villacorte
2017-07-01
Full Text Available This analysis focuses specifically on cases in which an indigenous person commits a crime in ignorance of the law: not only when he is absolutely unaware of the unlawfulness of his conduct, but also when he believes he is acting in strict accordance with ancestral beliefs and customs that in some cases clash with positive law. It likewise addresses the impossibility of imposing a penalty (when the offense is committed outside the community) or indigenous purification (when an act that disturbs social peace is committed within the indigenous community). It focuses mainly, however, on the impossibility of imposing a security measure when the crime has been committed outside the community, because in such cases the offender is deemed non-imputable (not criminally liable) and returns to his community, generating a discriminatory treatment that prevents the self-determination of the culturally different.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
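The voter described above selects, for each output signal, the majority value across the replica outputs, so the ensemble reproduces the reference circuit even though individual approximate circuits deviate. A minimal software sketch of the voting step (the three-replica, three-bit example is hypothetical):

```python
def majority_vote(outputs):
    """Bitwise majority over an odd number of replica outputs.
    Each element of `outputs` is a tuple of bits produced by one
    approximate circuit for the same input."""
    n_bits = len(outputs[0])
    voted = []
    for i in range(n_bits):
        ones = sum(out[i] for out in outputs)
        voted.append(1 if ones > len(outputs) / 2 else 0)
    return tuple(voted)

# Three approximate replicas of a reference circuit whose correct
# output is (1, 0, 1): each replica may deviate on some bit, but no
# majority of replicas deviates on the same bit at once.
replica_outputs = [
    (1, 0, 1),  # replica A agrees with the reference
    (1, 1, 1),  # replica B deviates on bit 1
    (1, 0, 0),  # replica C deviates on bit 2
]
print(majority_vote(replica_outputs))  # recovers (1, 0, 1)
```

The design constraint in the abstract is exactly this: approximate circuits may differ from the reference individually, as long as the per-bit majority always matches the reference output.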
The Insider Threat to Cybersecurity: How Group Process and Ignorance Affect Analyst Accuracy and Promptitude
Kelly, Ryan F.
2017-09-01
Dissertation.
Geographies of knowing, geographies of ignorance: jumping scale in Southeast Asia
van Schendel, W.
2002-01-01
'Area studies' use a geographical metaphor to visualise and naturalise particular social spaces as well as a particular scale of analysis. They produce specific geographies of knowing but also create geographies of ignorance. Taking Southeast Asia as an example, in this paper I explore how areas are imagined and how area knowledge is structured to construct area 'heartlands' as well as area 'borderlands'. This is illustrated by considering a large region of Asia (here named Zomia) that did ...
Harvey, Marc
2014-09-01
This paper proposes a model of human uniqueness based on an unusual distinction between two contrasted kinds of political competition and political status: (1) antagonistic competition, in quest of dominance (antagonistic status), a zero-sum, self-limiting game whose stake--who takes what, when, how--summarizes a classical definition of politics (Lasswell 1936), and (2) synergistic competition, in quest of merit (synergistic status), a positive-sum, self-reinforcing game whose stake becomes "who brings what to a team's common good." In this view, Rawls's (1971) famous virtual "veil of ignorance" mainly conceals politics' antagonistic stakes so as to devise the principles of a just, egalitarian society, yet without providing any means to enforce these ideals (Sen 2009). Instead, this paper proposes that human uniqueness flourished under a real "adapted veil of ignorance" concealing the steady inflation of synergistic politics which resulted from early humans' sturdy egalitarianism. This proposition divides into four parts: (1) early humans first stumbled on a purely cultural means to enforce a unique kind of within-team antagonistic equality--dyadic balanced deterrence thanks to handheld weapons (Chapais 2008); (2) this cultural innovation is thus closely tied to humans' darkest side, but it also launched the cumulative evolution of humans' brightest qualities--egalitarian team synergy and solidarity, together with the associated synergistic intelligence, culture, and communications; (3) runaway synergistic competition for differential merit among antagonistically equal obligate teammates is the single politically selective mechanism behind the cumulative evolution of all these brighter qualities, but numerous factors to be clarified here conceal this mighty evolutionary driver; (4) this veil of ignorance persists today, which explains why humans' unique prosocial capacities are still not clearly understood by science. The purpose of this paper is to start lifting
Shepherd, Steven; Kay, Aaron C
2012-02-01
How do people cope when they feel uninformed or unable to understand important social issues, such as the environment, energy concerns, or the economy? Do they seek out information, or do they simply ignore the threatening issue at hand? One would intuitively expect that a lack of knowledge would motivate an increased, unbiased search for information, thereby facilitating participation and engagement in these issues-especially when they are consequential, pressing, and self-relevant. However, there appears to be a discrepancy between the importance/self-relevance of social issues and people's willingness to engage with and learn about them. Leveraging the literature on system justification theory (Jost & Banaji, 1994), the authors hypothesized that, rather than motivating an increased search for information, a lack of knowledge about a specific sociopolitical issue will (a) foster feelings of dependence on the government, which will (b) increase system justification and government trust, which will (c) increase desires to avoid learning about the relevant issue when information is negative or when information valence is unknown. In other words, the authors suggest that ignorance-as a function of the system justifying tendencies it may activate-may, ironically, breed more ignorance. In the contexts of energy, environmental, and economic issues, the authors present 5 studies that (a) provide evidence for this specific psychological chain (i.e., ignorance about an issue → dependence → government trust → avoidance of information about that issue); (b) shed light on the role of threat and motivation in driving the second and third links in this chain; and (c) illustrate the unfortunate consequences of this process for individual action in those contexts that may need it most.
Fallon, SJ; Mattiesing, RM; Dolfen, N; Manohar, SGM; Husain, M
2017-01-01
Ignoring distracting information and updating current contents are essential components of working memory (WM). Yet, although both require controlling irrelevant information, it is unclear whether they have the same effects on recall and produce the same level of misbinding errors (incorrectly joining the features of different memoranda). Moreover, the likelihood of misbinding may be affected by the feature similarity between the items already encoded into memory and the information that has ...
New Tests of the Fixed Hotspot Approximation
Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.
2005-05-01
We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of
Ignoring alarming news brings indifference: Learning about the world and the self.
Paluck, Elizabeth Levy; Shafir, Eldar; Wu, Sherry Jueyu
2017-10-01
The broadcast of media reports about moral crises such as famine can subtly depress rather than activate moral concern. Whereas much research has examined the effects of media reports that people attend to, social psychological analysis suggests that what goes unattended can also have an impact. We test the idea that when vivid news accounts of human suffering are broadcast in the background but ignored, people infer from their choice to ignore these accounts that they care less about the issue, compared to those who pay attention and even to those who were not exposed. Consistent with research on self-perception and attribution, three experiments demonstrate that participants who were nudged to distract themselves in front of a television news program about famine in Niger (Study 1), or to skip an online promotional video for the Niger famine program (Study 2), or who chose to ignore the famine in Niger television program in more naturalistic settings (Study 3) all assigned lower importance to poverty and to hunger reduction compared to participants who watched with no distraction or opportunity to skip the program, or to those who did not watch at all. Copyright © 2017 Elsevier B.V. All rights reserved.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
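As an illustrative sketch only (not the paper's actual formulation), the state-dependent convex combination of two value-function approximations described above can be written as follows; the linear ramp weighting function is a hypothetical choice made up for this example:

```python
import numpy as np

def blended_value(x, V_staf, V_rmbrl, r=1.0):
    """Blend a local (StaF) and a regional (R-MBRL) value estimate with a
    state-dependent convex weight: near the origin the regional estimate
    dominates, far from it the local estimate dominates.
    The linear ramp weight is an illustrative choice, not the paper's."""
    w = min(np.linalg.norm(x) / r, 1.0)  # 0 at the origin, 1 outside radius r
    return w * V_staf(x) + (1.0 - w) * V_rmbrl(x)

# Toy usage with constant "value functions":
print(blended_value(np.array([0.0, 0.0]), lambda x: 2.0, lambda x: 0.0))  # 0.0
print(blended_value(np.array([3.0, 0.0]), lambda x: 2.0, lambda x: 0.0))  # 2.0
```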
The efficiency of Flory approximation
International Nuclear Information System (INIS)
Obukhov, S.P.
1984-01-01
The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms, which can then be treated self-consistently. The accuracy δν/ν of the Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
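For reference, the Flory estimate of the self-avoiding-walk exponent is ν = 3/(d + 2); a quick check against commonly quoted reference exponents (the d = 3 value below is an approximate literature figure, assumed here for illustration) reproduces the few-percent accuracy quoted above:

```python
def flory_nu(d):
    """Flory estimate of the self-avoiding-walk exponent nu in d dimensions."""
    return 3.0 / (d + 2)

# Reference values for nu (d=1, 2 and 4 are exact; d=3 is a numerical estimate).
reference = {1: 1.0, 2: 0.75, 3: 0.5876, 4: 0.5}

for d, nu_ref in reference.items():
    nu_f = flory_nu(d)
    rel_err = abs(nu_f - nu_ref) / nu_ref
    print(f"d={d}: Flory nu={nu_f:.4f}, reference nu={nu_ref:.4f}, "
          f"relative error={100 * rel_err:.1f}%")
```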
Approximate Implicitization Using Linear Algebra
Directory of Open Access Journals (Sweden)
Oliver J. D. Barrowclough
2012-01-01
We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We present several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
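A minimal sketch of the SVD-based idea (a hypothetical example, not the authors' code): sample a parametric curve, assemble a matrix of monomial values at the samples, and take the right singular vector belonging to the smallest singular value as the implicit coefficient vector. For the unit circle this recovers x² + y² − 1 = 0:

```python
import numpy as np

# Sample the parametric unit circle (cos t, sin t).
t = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
x, y = np.cos(t), np.sin(t)

# Monomial basis for an implicit polynomial of total degree 2:
# f(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
D = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# The coefficient vector minimizing ||D c|| subject to ||c|| = 1 is the
# right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(D)
c = Vt[-1]

c = c / c[3]           # scale so the x^2 coefficient is 1
print(np.round(c, 6))  # expect approximately [-1, 0, 0, 1, 0, 1]
```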
Rollout sampling approximate policy iteration
Dimitrakakis, C.; Lagoudakis, M.G.
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Framework for sequential approximate optimization
Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.
2004-01-01
An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
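The SAO idea itself can be sketched in a few lines of Python (an illustrative toy, unrelated to the framework's actual API): repeatedly fit a cheap local surrogate to the expensive function, optimize the surrogate within a move limit, and re-approximate around the new iterate:

```python
import numpy as np

def sao_minimize(f, x0, move_limit=0.5, tol=1e-8, max_iter=50):
    """Minimal 1-D sequential approximate optimization loop: fit a quadratic
    surrogate from three samples, minimize it inside a move limit, and
    re-approximate around the new iterate."""
    x = float(x0)
    for _ in range(max_iter):
        # Sample the (possibly expensive) function at three local points.
        pts = np.array([x - move_limit, x, x + move_limit])
        vals = np.array([f(p) for p in pts])
        a, b, c = np.polyfit(pts, vals, 2)  # quadratic surrogate
        # Minimize the surrogate on [x - move_limit, x + move_limit].
        candidates = [x - move_limit, x + move_limit]
        if a > 0:
            candidates.append(-b / (2 * a))  # unconstrained vertex
        x_new = min((p for p in candidates
                     if x - move_limit <= p <= x + move_limit),
                    key=lambda p: a * p**2 + b * p + c)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_star = sao_minimize(lambda x: (x - 2.0) ** 2, x0=0.0)
print(round(x_star, 6))  # converges to the minimizer at 2.0
```

Real SAO toolboxes replace the quadratic fit with problem-specific response-surface or mid-range approximations and adapt the move limits between cycles.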
Nuclear Hartree-Fock approximation testing and other related approximations
International Nuclear Information System (INIS)
Cohenca, J.M.
1970-01-01
Hartree-Fock and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to ²⁰Ne.
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can
Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial
Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew
2011-01-01
This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817
Burden of Circulatory System Diseases and Ignored Barriers of Knowledge Translation
Directory of Open Access Journals (Sweden)
Hamed-Basir Ghafouri
2012-10-01
Circulatory system diseases cause the third highest disability-adjusted life years among Iranians, and ischemic cardiac diseases are the main cause of this burden. Despite available evidence on the risk factors of the disease, no effective intervention has been implemented to control and prevent it. This paper non-systematically reviews the available literature on the problem, solutions, and barriers to the implementation of knowledge translation in Iran. It seems that factors such as cultural and motivational issues are ignored in knowledge translation interventions, but there is hope in the implementation of projects already started and in the preparation of students as the next generation of knowledge transferors.
Mes chers collègues, les moines, ou le partage de l’ignorance
Directory of Open Access Journals (Sweden)
Laurence Caillet
2009-03-01
My dear colleagues the monks, or the sharing of ignorance. No status has ever surprised me as much as that of “colleague” conferred on me by the monks of the Great Eastern Monastery of Nara. After testing my knowledge of ritual, these very learned monks made great show of their ignorance. Drawing my attention to liturgical details that they held to be incomprehensible, they took obvious pleasure in chatting about history and theology, as if I were capable of making the slightest contribution. This staging of the impenetrable nature of the ritual highlighted the ineffable character of the ceremonies performed in heaven long ago by superior beings. I provided a convenient pretext for describing the vanity of erudition in the face of the accomplishment of the mysteries, and also the importance of this erudition for renewing an original, irreparably unknowable meaning.
Shearlets and Optimally Sparse Approximations
DEFF Research Database (Denmark)
Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q
2012-01-01
Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
Approximations to camera sensor noise
Jin, Xiaodan; Hirakawa, Keigo
2013-02-01
Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given to these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
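The distinction between the two models is easy to see in simulation (an illustrative sketch; the signal level and sample count are made up): both match in mean and variance at a given signal level, but the Poisson model retains a positive skewness of about 1/√λ that the symmetric SD-AWGN model lacks:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0   # mean photon count per pixel (illustrative value)
n = 200_000

# Poisson model: variance equals the mean by construction.
poisson = rng.poisson(signal, n).astype(float)

# SD-AWGN model: Gaussian noise whose variance tracks the signal level.
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal), n)

for name, s in [("Poisson", poisson), ("SD-AWGN", sd_awgn)]:
    skew = ((s - s.mean()) ** 3).mean() / s.std() ** 3
    print(f"{name}: mean={s.mean():.2f}, var={s.var():.2f}, skew={skew:.3f}")
```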
Rational approximations for tomographic reconstructions
International Nuclear Information System (INIS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-01-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Approximate reasoning in physical systems
International Nuclear Information System (INIS)
Mutihac, R.
1991-01-01
The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Second, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. in IR spectroscopy). Third, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
Face Recognition using Approximate Arithmetic
DEFF Research Database (Denmark)
Marso, Karol
Face recognition is an image processing technique which aims to identify human faces and has found its use in various different fields, for example in security. Throughout the years this field evolved and there are many approaches and many different algorithms which aim to make the face recognition as effective...... processing applications the results do not need to be completely precise and use of the approximate arithmetic can lead to reduction in terms of delay, space and power consumption. In this paper we examine possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.
Approximate Reanalysis in Topology Optimization
DEFF Research Database (Denmark)
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...
Approximate Matching of Hierarchial Data
DEFF Research Database (Denmark)
Augsten, Nikolaus
The pq-grams of a tree are all its subtrees of a particular shape. Intuitively, two trees are similar if they have many pq-grams in common. The pq-gram distance is an efficient and effective approximation of the tree edit distance. We analyze the properties of the pq-gram distance and compare it with the tree edit
Approximation of Surfaces by Cylinders
DEFF Research Database (Denmark)
Randrup, Thomas
1998-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Approximation properties of haplotype tagging
Directory of Open Access Journals (Sweden)
Dreiseitl Stephan
2006-01-01
Abstract. Background: Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results: It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion: The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
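The "surprisingly simple" algorithm is a greedy set-cover: at each step, pick the SNP that separates the most still-indistinguishable haplotype pairs. A sketch with toy data (not taken from the paper):

```python
from itertools import combinations

def greedy_tag_snps(haplotypes):
    """Greedy haplotype tagging: repeatedly choose the SNP (column) that
    distinguishes the most currently-indistinguishable haplotype pairs,
    until every pair of distinct haplotypes differs at some chosen SNP."""
    n, m = len(haplotypes), len(haplotypes[0])
    uncovered = {(i, j) for i, j in combinations(range(n), 2)
                 if haplotypes[i] != haplotypes[j]}
    chosen = []
    while uncovered:
        best = max(range(m), key=lambda s: sum(
            haplotypes[i][s] != haplotypes[j][s] for i, j in uncovered))
        newly = {(i, j) for i, j in uncovered
                 if haplotypes[i][best] != haplotypes[j][best]}
        if not newly:
            break  # safety: no SNP separates the remaining pairs
        chosen.append(best)
        uncovered -= newly
    return chosen

haps = ["0011", "0101", "1001", "1110"]
tags = greedy_tag_snps(haps)
print(tags)  # a small set of columns that tells all four haplotypes apart
```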
All-Norm Approximation Algorithms
Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik
2002-01-01
A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ_p norms. We address this problem by introducing the concept of an All-norm ρ-approximation
Truthful approximations to range voting
DEFF Research Database (Denmark)
Filos-Ratsikas, Aris; Miltersen, Peter Bro
We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...
On badly approximable complex numbers
DEFF Research Database (Denmark)
Esdahl-Schou, Rune; Kristensen, S.
We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...
Approximate reasoning in decision analysis
Energy Technology Data Exchange (ETDEWEB)
Gupta, M M; Sanchez, E
1982-01-01
The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.
Rational approximation of vertical segments
Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte
2007-08-01
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
Pythagorean Approximations and Continued Fractions
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
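As a concrete aside (standard number theory, assumed here rather than taken from the article): the continued fraction of √2 is [1; 2, 2, 2, …], and its convergents, generated by the recurrence p′ = p + 2q, q′ = p + q from 1/1, are exactly the classical approximations 1, 3/2, 7/5, 17/12, … satisfying the Pell identity p² − 2q² = ±1:

```python
from fractions import Fraction

def sqrt2_convergents(k):
    """First k convergents of sqrt(2) = [1; 2, 2, 2, ...], via the
    recurrence p' = p + 2q, q' = p + q starting from 1/1."""
    p, q = 1, 1
    out = []
    for _ in range(k):
        out.append(Fraction(p, q))
        p, q = p + 2 * q, p + q
    return out

convs = sqrt2_convergents(6)  # 1, 3/2, 7/5, 17/12, 41/29, 99/70
for c in convs:
    # Each convergent satisfies the Pell identity p^2 - 2q^2 = +/-1,
    # and the error |c - sqrt(2)| shrinks rapidly.
    print(c, float(c) - 2 ** 0.5)
```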
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and
Goold, S D
1996-01-01
Assuming that rationing health care is unavoidable, and that it requires moral reasoning, how should we allocate limited health care resources? This question is difficult because our pluralistic, liberal society has no consensus on a conception of distributive justice. In this article I focus on an alternative: Who shall decide how to ration health care, and how shall this be done to respect autonomy, pluralism, liberalism, and fairness? I explore three processes for making rationing decisions: cost-utility analysis, informed democratic decision making, and applications of the veil of ignorance. I evaluate these processes as examples of procedural justice, assuming that there is no outcome considered the most just. I use consent as a criterion to judge competing processes so that rationing decisions are, to some extent, self-imposed. I also examine the processes' feasibility in our current health care system. Cost-utility analysis does not meet criteria for actual or presumed consent, even if costs and health-related utility could be measured perfectly. Existing structures of government cannot creditably assimilate the information required for sound rationing decisions, and grassroots efforts are not representative. Applications of the veil of ignorance are more useful for identifying principles relevant to health care rationing than for making concrete rationing decisions. I outline a process of decision making, specifically for health care, that relies on substantive, selected representation, respects pluralism, liberalism, and deliberative democracy, and could be implemented at the community or organizational level.
Neumann, Ewald; Nkrumah, Ivy K; Chen, Zhe
2018-03-03
Experiments examining identity priming from attended and ignored novel words (words that are used only once except when repetition is required due to experimental manipulation) in a lexical decision task are reported. Experiment 1 tested English monolinguals whereas Experiment 2 tested Twi (a native language of Ghana, Africa)-English bilinguals. Participants were presented with sequential pairs of stimuli composed of a prime followed by a probe, with each containing two items. The participants were required to name the target word in the prime display, and to make a lexical decision to the target item in the probe display. On attended repetition (AR) trials the probe target item was identical to the target word on the preceding attentional display. On ignored repetition (IR) trials the probe target item was the same as the distractor word in the preceding attentional display. The experiments produced facilitated (positive) priming in the AR trials and delayed (negative) priming in the IR trials. Significantly, the positive and negative priming effects also replicated across both monolingual and bilingual groups of participants, despite the fact that the bilinguals were responding to the task in their non-dominant language.
Illiteracy, Ignorance, and Willingness to Quit Smoking among Villagers in India
Gorty, Prasad V. S. N. R.; Allam, Apparao
1992-01-01
During the field work to control oral cancer, difficulty in communication was encountered with illiterates. A study to define the role of illiteracy, ignorance and willingness to quit smoking among the villagers was undertaken in a rural area surrounding Doddipatla Village, A.P., India. Out of a total population of 3,550, 272 (7.7%) persons, mostly in the age range of 21–50 years, attended a cancer detection camp. There were 173 (63.6%) females and 99 (36.4%) males, among whom 66 (M53 + F13) were smokers; 36.4% of males and 63% of females were illiterate. Among the illiterates, it was observed that smoking rate was high (56%) and 47.7% were ignorant of health effects of smoking. The attitude of illiterate smokers was encouraging, as 83.6% were willing to quit smoking. Further research is necessary to design health education material for 413.5 million illiterates living in India (1991 Indian Census). A community health worker, trained in the use of mass media coupled with a person‐to‐person approach, may help the smoker to quit smoking. PMID:1506267
Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.
Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam
2018-06-01
The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of proteins' primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. These algorithms start by finding a point to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved as the core and the folds of the protein will have two identical sides for all short sequences.
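In the HP model, the quantity a folding algorithm maximizes is the number of H-H topological contacts. A minimal scorer for a 2-D lattice fold (an illustrative sketch, not the paper's cellular-automaton folder):

```python
def hh_contacts(sequence, fold):
    """Count H-H topological contacts of a 2-D lattice fold in the HP model:
    pairs of H residues that are lattice neighbours but not sequence
    neighbours. `fold` maps each residue index to (x, y) coordinates."""
    assert len(sequence) == len(fold)
    contacts = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours
            if sequence[i] == sequence[j] == "H":
                (xi, yi), (xj, yj) = fold[i], fold[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice adjacency
                    contacts += 1
    return contacts

# A toy 4-residue fold of HPPH into a unit square: the two H endpoints
# become lattice neighbours and form one H-H contact.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hh_contacts("HPPH", square))  # 1
```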
Beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian S.
2013-01-01
We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...... functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators......, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...
Hydrogen: Beyond the Classic Approximation
International Nuclear Information System (INIS)
Scivetti, Ivan
2003-01-01
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
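The kind of error the paper quantifies can be seen in miniature on a two-event OR gate, where first-order (Taylor) variance propagation misses exactly the v1·v2 cross term. This is a self-contained sketch, not one of the paper's sample trees.

```python
# For a two-event OR gate, Q = q1 + q2 - q1*q2. With independent inputs of
# means m1, m2 and variances v1, v2, the exact variance is
#     Var(Q) = v1*(1 - m2)**2 + v2*(1 - m1)**2 + v1*v2,
# while first-order (Taylor) moment propagation keeps only the first two
# terms -- so the approximation error is exactly v1*v2.

def var_first_order(m1, v1, m2, v2):
    # (dQ/dq1)^2 * v1 + (dQ/dq2)^2 * v2, derivatives evaluated at the means
    return (1 - m2) ** 2 * v1 + (1 - m1) ** 2 * v2

def var_exact(m1, v1, m2, v2):
    return (1 - m2) ** 2 * v1 + (1 - m1) ** 2 * v2 + v1 * v2

m1 = m2 = 0.1   # mean basic-event probabilities
v1 = v2 = 0.01  # their variances
approx = var_first_order(m1, v1, m2, v2)
exact = var_exact(m1, v1, m2, v2)
print(exact - approx)  # the neglected term, v1*v2 = 0.0001
```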
WKB approximation in atomic physics
International Nuclear Information System (INIS)
Karnakov, Boris Mikhailovich
2013-01-01
Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB, or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed way of solving it. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.
Spin masters how the media ignored the real news and helped reelect Barack Obama
Freddoso, David
2013-01-01
The biggest story of the election was how the media ignored the biggest story of the election. Amid all the breathless coverage of a non-existent War on Women, there was little or no coverage of Obama's war on the economy: how, for instance, part-time work is replacing full-time work; how low-wage jobs are replacing high-wage ones; how for Americans between the ages of 25 and 54 there are fewer jobs today than there were when the recession officially ended in 2009, and fewer, in fact, than at any time since mid-1997. The downsizing of the American economy wasn't the only story
On Moderator Detection in Anchoring Research: Implications of Ignoring Estimate Direction
Directory of Open Access Journals (Sweden)
Nathan N. Cheek
2018-05-01
Anchoring, whereby judgments assimilate to previously considered standards, is one of the most reliable effects in psychology. In the last decade, researchers have become increasingly interested in identifying moderators of anchoring effects. We argue that a drawback of traditional moderator analyses in the standard anchoring paradigm is that they ignore estimate direction—whether participants’ estimates are higher or lower than the anchor value. We suggest that failing to consider estimate direction can sometimes obscure moderation in anchoring tasks, and discuss three potential analytic solutions that take estimate direction into account. Understanding moderators of anchoring effects is essential for a basic understanding of anchoring and for applied research on reducing the influence of anchoring in real-world judgments. Considering estimate direction reduces the risk of failing to detect moderation.
Effects of ignoring baseline on modeling transitions from intact cognition to dementia.
Yu, Lei; Tyas, Suzanne L; Snowdon, David A; Kryscio, Richard J
2009-07-01
This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-epsilon4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun.
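A minimal sketch of the five-state chain described above, with invented transition probabilities; in the paper, the probabilities depend on age and APOE-ε4 status through a multinomial logistic model with shared random effects.

```python
# Discrete-time Markov chain with three transient states (intact, MCI, GI)
# and two absorbing states (dementia, death). The probabilities below are
# invented for illustration only.

STATES = ["intact", "MCI", "GI", "dementia", "death"]

P = [  # rows: from-state, columns: to-state; each row sums to 1
    [0.80, 0.10, 0.05, 0.02, 0.03],  # intact
    [0.10, 0.60, 0.15, 0.10, 0.05],  # MCI
    [0.02, 0.08, 0.60, 0.20, 0.10],  # GI
    [0.00, 0.00, 0.00, 1.00, 0.00],  # dementia (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],  # death (absorbing)
]

def step(dist, P):
    """One step of the chain: propagate a distribution over states."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0, 0.0]  # cohort starts cognitively intact
for _ in range(100):
    dist = step(dist, P)

# After many assessments, essentially all probability mass sits in the two
# competing absorbing states.
print(round(dist[3] + dist[4], 6))
```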
The wisdom of ignorant crowds: Predicting sport outcomes by mere recognition
Directory of Open Access Journals (Sweden)
Stefan M. Herzog
2011-02-01
that bets on the fact that people's recognition knowledge of names is a proxy for their competitiveness: in sports, it predicts that the better-known team or player wins a game. We present two studies on the predictive power of recognition in forecasting soccer games (World Cup 2006 and UEFA Euro 2008) and analyze previously published results. The performance of the collective recognition heuristic is compared to two benchmarks: predictions based on official rankings and aggregated betting odds. Across three soccer and two tennis tournaments, the predictions based on recognition performed similarly to those based on rankings; when compared with betting odds, the heuristic fared reasonably well. Forecasts based on rankings, but not on betting odds, were improved by incorporating collective recognition information. We discuss the use of recognition for forecasting in sports and conclude that aggregating across individual ignorance spawns collective wisdom.
Approximate solutions to Mathieu's equation
Wilkinson, Samuel A.; Vogt, Nicolas; Golubev, Dmitry S.; Cole, Jared H.
2018-06-01
Mathieu's equation has many applications throughout theoretical physics. It is especially important to the theory of Josephson junctions, where it is equivalent to Schrödinger's equation. Mathieu's equation can be easily solved numerically; however, there exists no closed-form analytic solution. Here we collect various approximations which appear throughout the physics and mathematics literature and examine their accuracy and regimes of applicability. Particular attention is paid to quantities relevant to the physics of Josephson junctions, but the arguments and notation are kept general so as to be of use to the broader physics community.
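The claim that Mathieu's equation, y'' + (a − 2q cos 2t)y = 0, is easily solved numerically can be illustrated with a plain fixed-step RK4 integrator. This is a generic sketch, not code from the paper; the function name and step size are arbitrary.

```python
# Fixed-step RK4 integration of Mathieu's equation
#     y'' + (a - 2*q*cos(2t)) * y = 0,
# written as a first-order system in (y, y').
import math

def mathieu_rk4(a, q, y0, yp0, t_end, h=1e-3):
    """Integrate from t = 0 to t_end; returns (y(t_end), y'(t_end))."""
    def f(t, y, yp):
        return yp, -(a - 2.0 * q * math.cos(2.0 * t)) * y

    t, y, yp = 0.0, y0, yp0
    n = int(round(t_end / h))
    h = t_end / n  # land exactly on t_end
    for _ in range(n):
        k1y, k1p = f(t, y, yp)
        k2y, k2p = f(t + h / 2, y + h / 2 * k1y, yp + h / 2 * k1p)
        k3y, k3p = f(t + h / 2, y + h / 2 * k2y, yp + h / 2 * k2p)
        k4y, k4p = f(t + h, y + h * k3y, yp + h * k3p)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        yp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        t += h
    return y, yp

# Sanity check: for q = 0 the equation reduces to y'' + a*y = 0, so with
# y(0) = 1, y'(0) = 0 and a = 4 the exact solution is cos(2t), and y(pi) = 1.
y, _ = mathieu_rk4(a=4.0, q=0.0, y0=1.0, yp0=0.0, t_end=math.pi)
print(y)  # close to 1.0
```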
Approximate Inference for Wireless Communications
DEFF Research Database (Denmark)
Hansen, Morten
This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computational complexity of these, where the primary focus area is cellular systems such as the Global System for Mobile communications (GSM) (and extensions...... to the optimal one, which usually requires an unacceptably high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum......
Quantum tunneling beyond semiclassical approximation
International Nuclear Information System (INIS)
Banerjee, Rabin; Majhi, Bibhas Ranjan
2008-01-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Generalized Gradient Approximation Made Simple
International Nuclear Information System (INIS)
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-01-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. copyright 1996 The American Physical Society
Directory of Open Access Journals (Sweden)
Hulya CAGIRAN KENDIRLI
2014-12-01
According to the results of the research, there is no relationship between students' demographic characteristics and ignorance of food and quality legislation. There is, however, a relationship between sex and ignorance of food and quality legislation.
Impulse approximation in solid helium
International Nuclear Information System (INIS)
Glyde, H.R.
1985-01-01
The incoherent dynamic form factor S_i(Q, ω) is evaluated in solid helium for comparison with the impulse approximation (IA). The purpose is to determine the Q values for which the IA is valid for systems such as helium, where the atoms interact via a potential having a steeply repulsive but not infinite hard core. For ³He, S_i(Q, ω) is evaluated from first principles, beginning with the pair potential. The density of states g(ω) is evaluated using the self-consistent phonon theory and S_i(Q, ω) is expressed in terms of g(ω). For solid ⁴He, reasonable models of g(ω) using observed input parameters are used to evaluate S_i(Q, ω). In both cases S_i(Q, ω) is found to approach the impulse approximation S_IA(Q, ω) closely for wave vector transfers Q ≳ 20 Å⁻¹. The difference between S_i and S_IA, which is due to final-state interactions of the scattering atom with the remainder of the atoms in the solid, is also predominantly antisymmetric in (ω − ω_R), where ω_R is the recoil frequency. This suggests that the symmetrization procedure proposed by Sears to eliminate final-state contributions should work well in solid helium.
Finite approximations in fluid mechanics
International Nuclear Information System (INIS)
Hirschel, E.H.
1986-01-01
This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program ''Finite Approximations in Fluid Mechanics'' of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptic and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, zonal solutions for viscous flow problems, etc. The contents include: multigrid methods for problems from fluid dynamics; development of a 2D transonic potential flow solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; Navier-Stokes computations of two-dimensional laminar flows in a channel with a backward facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow-field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; zonal solutions for viscous flow problems
Plasma Physics Approximations in Ares
International Nuclear Information System (INIS)
Managan, R. A.
2015-01-01
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ (or ζ = ln(1 + e^{μ/θ})), and the temperature θ = kT. Since these formulae are expensive to compute, rational-function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^{−μ/θ})F_{1/2}(μ/θ), F′_{1/2}/F_{1/2}, F^c_α, and F^c_β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
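For context, a Fermi-Dirac integral such as F_{1/2} can be evaluated by brute-force quadrature, which shows both why cheap fits are preferred and the two limiting behaviors fits of this kind are designed to preserve. The sketch below uses the unnormalized convention F_{1/2}(η) = ∫₀^∞ √x/(1 + e^{x−η}) dx and is purely illustrative; it is not the Lee & More fits themselves.

```python
# Composite-Simpson evaluation of the Fermi-Dirac integral
#     F_{1/2}(eta) = Int_0^inf sqrt(x) / (1 + exp(x - eta)) dx,
# with checks of the non-degenerate and degenerate limits.
import math

def fermi_dirac_half(eta, n=20000):
    """Simpson-rule estimate of F_{1/2}(eta); n must be even."""
    upper = max(eta, 0.0) + 40.0  # the integrand is ~exp(-x) beyond this
    h = upper / n
    def f(x):
        return math.sqrt(x) / (1.0 + math.exp(x - eta))
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

# Non-degenerate limit (eta << 0): F_{1/2} -> (sqrt(pi)/2) * exp(eta).
nd = fermi_dirac_half(-5.0) / (math.sqrt(math.pi) / 2.0 * math.exp(-5.0))
# Degenerate limit (eta >> 1): F_{1/2} -> (2/3) * eta**1.5 (Sommerfeld).
dg = fermi_dirac_half(30.0) / ((2.0 / 3.0) * 30.0 ** 1.5)
print(round(nd, 3), round(dg, 3))  # both close to 1
```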
Approximating the minimum cycle mean
Directory of Open Access Journals (Sweden)
Krishnendu Chatterjee
2013-07-01
We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First, we show that the algorithmic question is reducible in O(n²) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem, and the running time of our algorithm is Õ(n^ω log³(nW/ε)/ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
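The exact baseline such algorithms are measured against is Karp's classic O(nm) dynamic program for the minimum cycle mean. A minimal version (not the paper's reduction) for a graph given as (u, v, weight) edges:

```python
# Karp's algorithm for the minimum mean-weight cycle of a directed graph.

def min_cycle_mean(n, edges):
    """Min over cycles of (total weight / length), or inf if acyclic."""
    INF = float("inf")
    # F[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # starting anywhere (equivalent to adding a zero-weight super-source).
    F = [[0.0] * n] + [[INF] * n for _ in range(n)]
    for k in range(1, n + 1):
        for u, v, w in edges:
            if F[k - 1][u] + w < F[k][v]:
                F[k][v] = F[k - 1][u] + w
    best = INF
    for v in range(n):
        if F[n][v] == INF:
            continue
        worst = max((F[n][v] - F[k][v]) / (n - k)
                    for k in range(n) if F[k][v] < INF)
        best = min(best, worst)
    return best

# Two cycles: 0->1->0 with mean (1+3)/2 = 2, and 1->2->1 with mean (1+1)/2 = 1.
print(min_cycle_mean(3, [(0, 1, 1), (1, 0, 3), (1, 2, 1), (2, 1, 1)]))  # → 1.0
```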
Tidal Evolution of Asteroidal Binaries. Ruled by Viscosity. Ignorant of Rigidity
Efroimsky, Michael
2015-01-01
The rate of tidal evolution of asteroidal binaries is defined by the dynamical Love numbers divided by quality factors. Common is the (often illegitimate) approximation of the dynamical Love numbers with their static counterparts. As the static Love numbers are, approximately, proportional to the inverse rigidity, this renders a popular fallacy that the tidal evolution rate is determined by the product of the rigidity by the quality factor: $\\,k_l/Q\\propto 1/(\\mu Q)\\,$. In reality, the dynami...
Nonlinear approximation with dictionaries I. Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...
Approximate cohomology in Banach algebras | Pourabbas ...
African Journals Online (AJOL)
We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...
Phonological processing of ignored distractor pictures, an fMRI investigation.
Bles, Mart; Jansma, Bernadette M
2008-02-11
Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the name of ignored distractor pictures is retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.
Directory of Open Access Journals (Sweden)
Nathan John Grills
2015-01-01
Background. Nearly one-third of adults in India use tobacco, resulting in 1.2 million deaths. However, little is known about knowledge, attitudes, and practices (KAP) related to smoking in the impoverished state of Uttarakhand. Methods. A cross-sectional epidemiological prevalence survey was undertaken. Multistage cluster sampling selected 20 villages and 50 households to survey from which 1853 people were interviewed. Tobacco prevalence and KAP were analyzed by income level, occupation, age, and sex. 95% confidence intervals were calculated using standard formulas and incorporating assumptions in relation to the clustering effect. Results. The overall prevalence of tobacco usage, defined using WHO criteria, was 38.9%. 93% of smokers and 86% of tobacco chewers were male. Prevalence of tobacco use, controlling for other factors, was associated with lower education, older age, and male sex. 97.6% of users and 98.1% of nonusers wanted less tobacco. Except for lung cancer (89% awareness), awareness of diseases caused by tobacco usage was low (cardiac: 67%; infertility: 32.5%; stroke: 40.5%). Conclusion. A dangerous combination of high tobacco usage prevalence, ignorance about its dangers, and few quit attempts being made suggests the need to develop effective and evidence based interventions to prevent a health and development disaster in Uttarakhand.
Reassessing insurers' access to genetic information: genetic privacy, ignorance, and injustice.
Feiring, Eli
2009-06-01
Many countries have imposed strict regulations on the genetic information to which insurers have access. Commentators have warned against the emerging body of legislation for different reasons. This paper demonstrates that, when confronted with the argument that genetic information should be available to insurers for health insurance underwriting purposes, one should avoid appeals to rights of genetic privacy and genetic ignorance. The principle of equality of opportunity may nevertheless warrant restrictions. A choice-based account of this principle implies that it is unfair to hold people responsible for the consequences of the genetic lottery, since we have no choice in selecting our genotype or the expression of it. However appealing, this view does not take us all the way to an adequate justification of inaccessibility of genetic information. A contractarian account, suggesting that health is a condition of opportunity and that healthcare is an essential good, seems more promising. I conclude that if or when predictive medical tests (such as genetic tests) are developed with significant actuarial value, individuals have less reason to accept as fair institutions that limit access to healthcare on the grounds of risk status. Given the assumption that a division of risk pools in accordance with a rough estimate of people's level of (genetic) risk will occur, fairness and justice favour universal health insurance based on solidarity.
Phonological processing of ignored distractor pictures, an fMRI investigation
Directory of Open Access Journals (Sweden)
Bles Mart
2008-02-01
Abstract Background Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Results Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Conclusion Under some circumstances, the name of ignored distractor pictures is retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.
IGNORING CHILDREN'S BEDTIME CRYING: THE POWER OF WESTERN-ORIENTED BELIEFS.
Maute, Monique; Perren, Sonja
2018-03-01
Ignoring children's bedtime crying (ICBC) is an issue that polarizes parents as well as pediatricians. While most studies have focused on the effectiveness of sleep interventions, no study has yet questioned which parents use ICBC. Parents often find children's sleep difficulties to be very challenging, but factors such as the influence of Western approaches to infant care, stress, and sensitivity have not been analyzed in terms of ICBC. A sample of 586 parents completed a questionnaire to investigate the relationships between parental factors and the method of ICBC. Data were analyzed using structural equation modeling. Latent variables were used to measure parental stress (Parental Stress Scale; J.O. Berry & W.H. Jones, 1995), sensitivity (Situation-Reaction-Questionnaire; Y. Hänggi, K. Schweinberger, N. Gugger, & M. Perrez, 2010), Western-oriented parental beliefs (Rigidity), and children's temperament (Parenting Stress Index; H. Tröster & R.R. Abidin). ICBC was used by 32.6% (n = 191) of parents in this study. Parents' Western-oriented beliefs predicted ICBC. Attitudes such as feeding a child on a time schedule and not carrying it out to prevent dependence were associated with letting the child cry to fall asleep. Low-sensitivity parents as well as parents of children with a difficult temperament used ICBC more frequently. Path analysis shows that parental stress did not predict ICBC. The results suggest that ICBC has become part of Western childrearing tradition. © 2018 Michigan Association for Infant Mental Health.
Behavioural responses to human-induced change: Why fishing should not be ignored.
Diaz Pauli, Beatriz; Sih, Andrew
2017-03-01
Change in behaviour is usually the first response to human-induced environmental change and key for determining whether a species adapts to environmental change or becomes maladapted. Thus, understanding the behavioural response to human-induced changes is crucial in the interplay between ecology, evolution, conservation and management. Yet the behavioural response to fishing activities has been largely ignored. We review studies contrasting how fish behaviour affects catch by passive (e.g., long lines, angling) versus active gears (e.g., trawls, seines). We show that fishing not only targets certain behaviours, but it leads to a multitrait response including behavioural, physiological and life-history traits with population, community and ecosystem consequences. Fisheries-driven change (plastic or evolutionary) of fish behaviour and its correlated traits could impact fish populations well beyond their survival per se, affecting predation risk, foraging behaviour, dispersal, parental care, etc., and hence numerous ecological issues including population dynamics and trophic cascades. In particular, we discuss implications of behavioural responses to fishing for fisheries management and population resilience. More research on these topics, however, is needed to draw general conclusions, and we suggest fruitful directions for future studies.
Experimental amplification of an entangled photon: what if the detection loophole is ignored?
International Nuclear Information System (INIS)
Pomarico, Enrico; Sanguinetti, Bruno; Sekatski, Pavel; Zbinden, Hugo; Gisin, Nicolas
2011-01-01
The experimental verification of quantum features, such as entanglement, at large scales is extremely challenging because of environment-induced decoherence. Indeed, measurement techniques for demonstrating the quantumness of multiparticle systems in the presence of losses are difficult to define, and if they are not sufficiently accurate they can provide wrong conclusions. We present a Bell test where one photon of an entangled pair is amplified and then detected by threshold detectors, whose signals undergo postselection. The amplification is performed by a classical machine, which produces a fully separable micro-macro state. However, by adopting such a technique one can surprisingly observe a violation of the Clauser-Horne-Shimony-Holt inequality. This is due to the fact that ignoring the detection loophole opened by the postselection and the system losses can lead to misinterpretations, such as claiming micro-macro entanglement in a setup where evidently it is not present. By using threshold detectors and postselection, one can only infer the entanglement of the initial pair of photons, and so micro-micro entanglement, as is further confirmed by the violation of a nonseparability criterion for bipartite systems. How to detect photonic micro-macro entanglement in the presence of losses with the currently available technology remains an open question.
Commentary: Ignorance as Bias: Radiolab, Yellow Rain, and “The Fact of the Matter”
Directory of Open Access Journals (Sweden)
Paul Hillmer
2017-12-01
In 2012 the National Public Radio show “Radiolab” released a podcast (later broadcast on air) essentially asserting that Hmong victims of a suspected chemical agent known as “yellow rain” were ignorant of their surroundings and the facts, and were merely victims of exposure, dysentery, tainted water, and other natural causes. Relying heavily on the work of Dr. Matthew Meselson, Dr. Thomas Seeley, and former CIA officer Merle Pribbenow, Radiolab asserted that Hmong victims mistook bee droppings, defecated en masse by flying Asian honey bees, as “yellow rain.” They brought their foregone conclusions to an interview with Eng Yang, a self-described yellow rain survivor, and his niece, memoirist Kao Kalia Yang, who served as translator. The interview went horribly wrong when their dogged belief in the “bee dung hypothesis” was met with stiff and ultimately impassioned opposition. Radiolab’s confirmation bias led them to dismiss contradictory scientific evidence and mislead their audience. While the authors remain agnostic about the potential use of yellow rain in Southeast Asia, they believe the evidence shows that further study is needed before a final conclusion can be reached.
Introduction to Methods of Approximation in Physics and Astronomy
van Putten, Maurice H. P. M.
2017-04-01
secular behavior. For instance, secular evolution of orbital parameters may derive from averaging over essentially periodic behavior on relatively short, orbital periods. When the original number of degrees of freedom is large, averaging over dynamical time scales may lead to a formulation in terms of a system in approximately thermodynamic equilibrium subject to evolution on a secular time scale by a regular or singular perturbation. In modern astrophysics and cosmology, gravitation is being probed across an increasingly broad range of scales and more accurately so than ever before. These observations probe weak gravitational interactions below what is encountered in our solar system by many orders of magnitude. These observations thereby probe (curved) spacetime at low energy scales that may reveal novel properties hitherto unanticipated in the classical vacuum of Newtonian mechanics and Minkowski spacetime. Dark energy and dark matter encountered on the scales of galaxies and beyond, therefore, may be, in part, revealing our ignorance of the vacuum at the lowest energy scales encountered in cosmology. In this context, our application of Newtonian mechanics to globular clusters, galaxies and cosmology is an approximation assuming a classical vacuum, ignoring the potential for hidden low energy scales emerging on cosmological scales. Given our ignorance of the latter, this poses a challenge in the potential for unknown systematic deviations. If of quantum mechanical origin, such deviations are often referred to as anomalies. While they are small in traditional, macroscopic Newtonian experiments in the laboratory, the same is not a given in the limit of arbitrarily weak gravitational interactions. We hope this selection of introductory material is useful and kindles the reader's interest to become a creative member of modern astrophysics and cosmology.
Directory of Open Access Journals (Sweden)
Gurutzeta Guillera-Arroita
In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating in detail their arguments, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting in those instances where hierarchical occupancy models do not perform well to the naïve occupancy estimator does not provide a satisfactory solution. The aim should instead be to achieve better estimation, by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, considering model extensions where appropriate.
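The occupancy/detection confounding the authors describe follows from one line of algebra: with J survey visits, occupancy ψ and per-visit detection probability p, the naive estimator (fraction of sites with at least one detection) has expectation ψ(1 − (1 − p)^J), so very different (ψ, p) pairs are indistinguishable to it. The numbers below are invented for illustration.

```python
# Expected value of the naive occupancy estimator under the standard
# constant-(psi, p) occupancy model: a site is "naively occupied" if it
# yields at least one detection in J visits.

def naive_occupancy_expectation(psi, p, J):
    return psi * (1.0 - (1.0 - p) ** J)

J = 3
a = naive_occupancy_expectation(0.60, 0.800, J)   # common, easily detected
b = naive_occupancy_expectation(0.90, 0.3027, J)  # widespread but cryptic
print(round(a, 3), round(b, 3))  # ~0.595 for both: naively indistinguishable
```

Hierarchical occupancy models resolve this by modeling the detection histories across the J visits rather than collapsing them to detected/not-detected.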
Ignorance is no excuse for directors minimizing information asymmetry affecting boards
Directory of Open Access Journals (Sweden)
Eythor Ivar Jonsson
2006-11-01
Full Text Available This paper looks at information asymmetry at board level and how lack of information has played a part in undermining the power of the board of directors. Information is power, and at board level information is essential to keep the board knowledgeable about the failures and successes of the organization it is supposed to govern. Although lack of information has become a popular excuse for boards, the mantra could, and should, be changing to “Ignorance is no excuse” (Mueller, 1993). This paper explores information system solutions that aim to resolve some of the problems of information asymmetry. Furthermore, three case studies are used to explore the problem of asymmetric information at board level and how boards are trying to solve it. The focus of the discussion is to (a) describe how directors experience information asymmetry and whether they find it troublesome, (b) examine how important information is for the control and strategy roles of the board, and (c) find out how boards can minimize the problem of asymmetric information. The research is conducted through semi-structured interviews with directors, managers and accountants. This paper offers an interesting exploration of information, or the lack of information, at board level. It describes, from both a theoretical and a practical viewpoint, the problem of information asymmetry at board level and how companies are trying to solve it. It is an issue that has only been lightly touched upon in the corporate governance literature but is likely to attract more attention and research in the future.
On the practice of ignoring center-patient interactions in evaluating hospital performance.
Varewyck, Machteld; Vansteelandt, Stijn; Eriksson, Marie; Goetghebeur, Els
2016-01-30
We evaluate the performance of medical centers based on a continuous or binary patient outcome (e.g., 30-day mortality). Common practice adjusts for differences in patient mix through outcome regression models, which include patient-specific baseline covariates (e.g., age and disease stage) besides center effects. Because a large number of centers may need to be evaluated, the typical model postulates that the effect of a center on outcome is constant over patient characteristics. This may be violated, for example, when some centers are specialized in children or geriatric patients. Including interactions between certain patient characteristics and the many fixed center effects in the model increases the risk for overfitting, however, and could imply a loss of power for detecting centers with deviating mortality. Therefore, we assess how the common practice of ignoring such interactions impacts the bias and precision of directly and indirectly standardized risks. The reassuring conclusion is that the common practice of working with the main effects of a center has minor impact on hospital evaluation, unless some centers actually perform substantially better on a specific group of patients and there is strong confounding through the corresponding patient characteristic. The bias is then driven by an interplay of the relative center size, the overlap between covariate distributions, and the magnitude of the interaction effect. Interestingly, the bias on indirectly standardized risks is smaller than on directly standardized risks. We illustrate our findings by simulation and in an analysis of 30-day mortality on Riksstroke. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
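The difference between the two standardization schemes compared above can be sketched in a toy simulation (all coefficients, names and the two-center setup are invented for illustration, and the true risk model stands in for a fitted outcome regression):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

n = 200_000
center = rng.integers(0, 2, n)                     # two centers with different case mix
age = rng.normal(0.5 * center, 1.0)                # center 1 treats older patients
risk = sigmoid(-2 + 0.8 * age + 0.4 * center)      # main center effect, no interaction
died = rng.random(n) < risk

# Directly standardized risk: each center's risk model averaged over the
# covariate distribution of the WHOLE population
direct = [sigmoid(-2 + 0.8 * age + 0.4 * c).mean() for c in (0, 1)]

# Indirectly standardized risk: observed/expected ratio (expected taken from a
# reference model without center effects), scaled by the overall event rate
expected = sigmoid(-2 + 0.8 * age)
indirect = [died[center == c].mean() / expected[center == c].mean() * died.mean()
            for c in (0, 1)]
print(direct, indirect)  # center 1 shows elevated standardized risk either way
```

Direct standardization asks "what if every center treated the whole population?", while indirect standardization compares each center's observed deaths with what the reference model expects for its own patients; the paper's point concerns how both behave when the no-interaction assumption in the risk model is violated.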
Directory of Open Access Journals (Sweden)
S. Q. Zhao
2009-08-01
Full Text Available Land use change is critical in determining the distribution, magnitude and mechanisms of terrestrial carbon budgets at local to global scales. To date, almost all regional to global carbon cycle studies are driven by a static land use map or by land use change statistics with decadal time intervals. The biases in quantifying carbon exchange between terrestrial ecosystems and the atmosphere caused by using such land use change information have not been investigated. Here, we used the General Ensemble biogeochemical Modeling System (GEMS), along with consistent and spatially explicit land use change scenarios at different intervals (1 yr, 5 yr, 10 yr and static, respectively), to evaluate the impacts of land use change data frequency on estimates of regional carbon sequestration in the southeastern United States. Our results indicate that ignoring the detailed fast-changing dynamics of land use can lead to a significant overestimation of carbon uptake by the terrestrial ecosystem. Estimated regional carbon sequestration increased from 0.27 to 0.69, 0.80 and 0.97 Mg C ha^{−1} yr^{−1} as the land use change data interval shifted from 1 year to 5 years, 10 years and static land use information, respectively. Carbon removal by forest harvesting and the prolonged cumulative impacts of historical land use change on the carbon cycle accounted for the differences in carbon sequestration between the static and dynamic land use change scenarios. The results suggest that it is critical to incorporate the detailed dynamics of land use change into local to global carbon cycle studies. Otherwise, it is impossible to accurately quantify the geographic distributions, magnitudes and mechanisms of terrestrial carbon sequestration at local to global scales.
Limitations of the paraxial Debye approximation.
Sheppard, Colin J R
2013-04-01
In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.
Regenwetter, Michel; Ho, Moon-Ho R.; Tsetlin, Ilia
2007-01-01
This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two…
Oros, Nicolas; Chiba, Andrea A.; Nitz, Douglas A.; Krichmar, Jeffrey L.
2014-01-01
Learning to ignore irrelevant stimuli is essential to achieving efficient and fluid attention, and serves as the complement to increasing attention to relevant stimuli. The different cholinergic (ACh) subsystems within the basal forebrain regulate attention in distinct but complementary ways. ACh projections from the substantia innominata/nucleus…
Castleden, Heather; Daley, Kiley; Sloan Morgan, Vanessa; Sylvestre, Paul
2013-01-01
Geography is a product of colonial processes, and in Canada, the exclusion from educational curricula of Indigenous worldviews and their lived realities has produced "geographies of ignorance". Transformative learning is an approach geographers can use to initiate changes in non-Indigenous student attitudes about Indigenous…
Tsevreni, Irida
2018-01-01
This paper presents an attempt to apply Jacques Rancière's emancipatory pedagogy of "the ignorant schoolmaster" to environmental education, which emphasises environmental ethics. The paper tells the story of a philosophy of nature project in the framework of an environmental adult education course at a Second Chance School in Greece,…
Kuhlicke, C.
2009-04-01
By definition natural disasters always contain a moment of surprise. Their occurrence is mostly unforeseen and unexpected. They hit people unprepared, overwhelm them and expose their helplessness. Yet surprisingly little is known about the reasons for this being surprised. Aren't natural disasters expectable and foreseeable after all? Aren't the return rates of most hazards well known, and shouldn't people be better prepared? The central question of this presentation is hence: why do natural disasters so often radically surprise people at all (and how can we explain this surprise)? In the first part of the presentation, it is argued that most approaches to vulnerability are not able to grasp this moment of surprise. On the contrary, they have their strength in unravelling the expectable: a person who is marginalized or even oppressed in everyday life is also vulnerable during times of crisis and stress; at least this is the central assumption of most vulnerability studies. In the second part, an understanding of vulnerability is developed which allows such radical surprises to be taken into account. First, two forms of the unknown are differentiated: an area of the unknown an actor is more or less aware of (ignorance), and an area which is not even known to be not known (nescience). The discovery of the latter is mostly associated with a "radical surprise", since it is by definition impossible to prepare for it. Second, a definition of vulnerability is proposed which captures the dynamics of surprises: people are vulnerable when they discover their nescience, which by definition exceeds previously established routines, stocks of knowledge and resources (in a general sense, their capacities) to deal with their physical and/or social environment. This definition explicitly takes the view of different actors seriously and departs from their being surprised. In the third part, findings of a case study, the 2002 flood in Germany, are presented. It is shown
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
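The forward direction (Chebyshev approximation to linear program) is the classical construction: introduce a bound t on all absolute residuals and minimize it. A small sketch, assuming SciPy's linprog as the solver (the data points are arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

# Minimax (Chebyshev) fit of a line c0 + c1*u to four data points:
#   minimize t  subject to  |c0 + c1*u_i - v_i| <= t  for all i
u = np.array([0.0, 1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 1.0, 2.0])

A = np.column_stack([np.ones_like(u), u])        # design matrix for [c0, c1]
ones = np.ones((len(u), 1))
# LP variables z = [c0, c1, t]; objective: minimize t
cost = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack([np.hstack([A, -ones]),         #  A@[c0,c1] - t <= v
                  np.hstack([-A, -ones])])       # -A@[c0,c1] - t <= -v
b_ub = np.concatenate([v, -v])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
c0, c1, t = res.x
print(c0, c1, t)  # -> 0.25 0.5 0.25: residuals equioscillate at ±0.25
```

The paper's contribution is the converse reduction, from an arbitrary linear program back to a Chebyshev approximation problem, which this snippet does not attempt.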
Some relations between entropy and approximation numbers
Institute of Scientific and Technical Information of China (English)
郑志明
1999-01-01
A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with cases where the approximation numbers decay rapidly. A useful estimate relating the entropy and approximation numbers of noncompact maps is also given.
Axiomatic Characterizations of IVF Rough Approximation Operators
Directory of Open Access Journals (Sweden)
Guangji Yu
2014-01-01
Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterized by axioms.
An approximation for kanban controlled assembly systems
Topan, E.; Avsar, Z.M.
2011-01-01
An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated
Operator approximant problems arising from quantum theory
Maher, Philip J
2017-01-01
This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.
Do Not ignore pulmonary hypertension any longer. It’s time to deal with it!
Directory of Open Access Journals (Sweden)
Ahmad Mirdamadi
2011-08-01
thromboembolic attacks in check. Then came a revolution in pulmonary hypertension management: with the emergence of advanced PH treatment, medicine became able to deal seriously with PH. This new strategy was shown to prevent mortality in PH patients (5, Figure 1). Prostacyclin showed that it is possible to enhance PH patients’ chance of survival. Phosphodiesterase inhibitor drugs, long used for treating impotence, were demonstrated to be effective in reducing pulmonary pressure. Eventually, endothelin receptors were targeted; with the advent of endothelin receptor blockers such as bosentan, physicians’ chances of helping PH patients improved further. Today, with advanced PH treatment, PH is no longer regarded, as it once was, as a condition before which medicine stood as a failed discipline. It is important not to forget PH in patients, especially severely ill patients or those refractory to traditional treatment, in surgical, obstetric, pediatric, internal medicine, ICU or CCU wards of hospitals. With timely diagnosis, it will be possible to control PH effectively and to enhance patients’ chance of survival. So it is time to pay more attention to PH: do not ignore it any longer, and it is time to deal with it
Beneficiation and agglomeration of manganese ore fines (an area so important and yet so ignored)
Sane, R.
2018-01-01
Unpredictable changes in demand, and prices varying from very attractive to depressing levels, have thrown all manganese ore mines out of normal operating gear. The supply has to be time-bound, of dependable quality and continuous. With the setting-up of numerous small units alongside existing ferro-alloy units, ore supply has become an extremely sensitive issue. Due to unpredictable swings in the price of Mn ore lumps, furnace operators found it economic and convenient to use fines, even at great risk to furnace equipment and operating personnel, and those risks and damages were conveniently and comfortably ignored. Beneficiation cost (operating, approximate, for ferruginous ore, by the roast reduction followed by magnetic separation route): water 20/-, power 490/-, coal fines 675/-, overheads 250/-, totalling Rs. 1435/T (figures are based on actual data from investigations on Orissa and Karnataka sector ores). Feed grade: Mn 28 to 32%, Fe 14 to 25%. Concentrate (beneficiated ore fines): Mn 45 to 48%, Fe 6 to 8%; recovery 35%. Price of 28-30% Mn ore fines = Rs. 2400/T; cost of concentrated fines (45/48% Mn grade) = Rs. 8300/T; price of 47-48% Mn lumpy ore = Rs. 11,000/T. Sintering cost (operating): approximately Rs. 1195/T sinter. Therefore the cost of sinter produced from beneficiated concentrate is 9130 + 1195 = Rs. 10,325. The difference in cost between 48% Mn ore lumps and 48% Mn sintered concentrate = 11,000 - 10,325 = Rs. 675/T. The main purpose of this paper is to show that establishing a beneficiation unit and a sintering unit is economically feasible. There are many misconceptions still prevailing about the use of Mn ore sinters. A few of the main misconceptions are: (1) sinters bring no benefit, technical or economical; (2) sinters are very friable and disintegrate easily into fines during handling/transportation; (3) fines below 100 mesh cannot be sintered; (4) silica increases to a high level during sintering, resulting in high slag volume and thereby higher power consumption. All are false
Analysis of corrections to the eikonal approximation
Hebborn, C.; Capel, P.
2017-11-01
Various corrections to the eikonal approximations are studied for two- and three-body nuclear collisions with the goal to extend the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does not improve much the elastic-scattering cross sections obtained at the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes the impact parameter by a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility to analyze data measured down to 10 MeV/nucleon within eikonal-like reaction models.
Tidal Evolution of Asteroidal Binaries. Ruled by Viscosity. Ignorant of Rigidity.
Efroimsky, Michael
2015-10-01
This is a pilot paper serving as a launching pad for the study of orbital and spin evolution of binary asteroids. The rate of tidal evolution of asteroidal binaries is defined by the dynamical Love numbers k_l divided by quality factors Q. Common in the literature is the (oftentimes illegitimate) approximation of the dynamical Love numbers with their static counterparts. Since the static Love numbers are, approximately, proportional to the inverse rigidity, this renders a popular fallacy that the tidal evolution rate is determined by the product of the rigidity and the quality factor: k_l/Q ∝ 1/(μQ). In reality, the dynamical Love numbers depend on the tidal frequency and all rheological parameters of the tidally perturbed body (not just rigidity). We demonstrate that in asteroidal binaries the rigidity of their components plays virtually no role in tidal friction and tidal lagging, and thereby has almost no influence on the intensity of tidal interactions (tidal torques, tidal dissipation, tidally induced changes of the orbit). A key quantity that overwhelmingly determines the tidal evolution is the product of the effective viscosity η and the tidal frequency χ. The functional form of the torque’s dependence on this product depends on who wins in the competition between viscosity and self-gravitation; hence a quantitative criterion to distinguish between the two regimes. For higher values of ηχ we get k_l/Q ∝ 1/(ηχ), while for lower values we obtain k_l/Q ∝ ηχ. Our study rests on the assumption that asteroids can be treated as Maxwell bodies. Applicable to rigid rocks at low frequencies, this approximation is used here also for rubble piles, due to the lack of a better model. In the future, as we learn more about the mechanics of granular mixtures in a weak gravity field, we may have to amend the tidal theory with other rheological parameters, ones that do not show up in the description of viscoelastic bodies. This line of study provides
Ellenbogen, Mark A; Linnen, Anne-Marie; Cardoso, Christopher; Joober, Ridha
2013-03-01
The administration of oxytocin promotes prosocial behavior in humans. The mechanism by which this occurs is unknown, but it likely involves changes in social information processing. In a randomized placebo-controlled study, we examined the influence of intranasal oxytocin and placebo on the interference control component of inhibition (i.e. ability to ignore task-irrelevant information) in 102 participants using a negative affective priming task with sad, angry, and happy faces. In this task, participants are instructed to respond to a facial expression of emotion while simultaneously ignoring another emotional face. On the subsequent trial, the previously-ignored emotional valence may become the emotional valence of the target face. Inhibition is operationalized as the differential delay between responding to a previously-ignored emotional valence and responding to an emotional valence unrelated to the previous one. Although no main effect of drug administration on inhibition was observed, a drug × depressive symptom interaction (β = -0.25; t = -2.6, p < 0.05) predicted the inhibition of sad faces. Relative to placebo, participants with high depression scores who were administered oxytocin were unable to inhibit the processing of sad faces. There was no relationship between drug administration and inhibition among those with low depression scores. These findings are consistent with increasing evidence that oxytocin alters social information processing in ways that have both positive and negative social outcomes. Because elevated depression scores are associated with an increased risk for major depressive disorder, difficulties inhibiting mood-congruent stimuli following oxytocin administration may be associated with risk for depression. Copyright © 2012 Elsevier Ltd. All rights reserved.
Spence, C; Ranson, J; Driver, J
2000-02-01
In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.
Born approximation to a perturbative numerical method for the solution of the Schroedinger equation
International Nuclear Information System (INIS)
Adam, Gh.
1978-01-01
A step function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to the PN methods, is the close connection between the first order perturbation theory of the PN approach and the well-known Born approximation, and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)
Mapping moveout approximations in TI media
Stovas, Alexey; Alkhalifah, Tariq Ali
2013-01-01
Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.
Analytical approximation of neutron physics data
International Nuclear Information System (INIS)
Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.
1984-01-01
A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Pade approximation, is suggested. It is shown that the specific behaviour of the Pade approximation in polar zones is an extremely favourable analytical property, essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Pade approximation is a particularly natural instrument for resonance curve processing, as the resonances correspond to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy
A unified approach to the Darwin approximation
International Nuclear Information System (INIS)
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-01-01
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
Bounded-Degree Approximations of Stochastic Networks
Energy Technology Data Exchange (ETDEWEB)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Spearing, Natalie M; Connelly, Luke B; Nghiem, Hong S; Pobereskin, Louis
2012-11-01
This study highlights the serious consequences of ignoring reverse causality bias in studies on compensation-related factors and health outcomes and demonstrates a technique for resolving this problem of observational data. Data from an English longitudinal study on factors, including claims for compensation, associated with recovery from neck pain (whiplash) after rear-end collisions are used to demonstrate the potential for reverse causality bias. Although it is commonly believed that claiming compensation leads to worse recovery, it is also possible that poor recovery may lead to compensation claims--a point that is seldom considered and never addressed empirically. This pedagogical study compares the association between compensation claiming and recovery when reverse causality bias is ignored and when it is addressed, controlling for the same observable factors. When reverse causality is ignored, claimants appear to have a worse recovery than nonclaimants; however, when reverse causality bias is addressed, claiming compensation appears to have a beneficial effect on recovery, ceteris paribus. To avert biased policy and judicial decisions that might inadvertently disadvantage people with compensable injuries, there is an urgent need for researchers to address reverse causality bias in studies on compensation-related factors and health. Copyright © 2012 Elsevier Inc. All rights reserved.
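The mechanism the study warns about is easy to reproduce in a toy simulation (all numbers and variable names are invented for illustration; this is not the study's data or method). Claiming is made strictly beneficial here, yet because poor prognosis drives claiming, the naive claimant-vs-nonclaimant contrast points the other way:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

severity = rng.exponential(1.0, n)                   # drives both claiming and recovery
p_claim = 1 / (1 + np.exp(2 - 2 * severity))         # worse injuries -> more likely to claim
claim = rng.random(n) < p_claim
# True causal effect of claiming is BENEFICIAL: -2 weeks of recovery time
recovery_weeks = 10 + 8 * severity - 2 * claim + rng.normal(0, 1, n)

# Naive contrast: claimants look worse, because severity is ignored
naive_diff = recovery_weeks[claim].mean() - recovery_weeks[~claim].mean()

# Adjusting for the driver of reverse causation recovers the true -2 effect
X = np.column_stack([np.ones(n), severity, claim])
beta = np.linalg.lstsq(X, recovery_weeks, rcond=None)[0]
print(naive_diff, beta[2])
```

In real observational data the analogue of `severity` is only partially observed, which is why the study has to resolve the bias with a dedicated technique rather than simple covariate adjustment.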
2009-12-01
], achieving an impressive collection of the properties of these variable stars. Outstanding sets of data like the one collected by Nicholls and her colleagues often offer guidance on how to solve a cosmic puzzle by narrowing down the plethora of possible explanations proposed by the theoreticians. In this case, however, the observations are incompatible with all the previously conceived models and re-open an issue that has been thoroughly debated. Thanks to this study, astronomers are now aware of their own "ignorance" - a genuine driver of the knowledge-seeking process, as the ancient Greek philosopher Socrates is said to have taught. "The newly gathered data show that pulsations are an extremely unlikely explanation for the additional variation," says team leader Peter Wood. "Another possible mechanism for producing luminosity variations in a star is to have the star itself move in a binary system. However, our observations are strongly incompatible with this hypothesis too." The team found from further analysis that whatever the cause of these unexplained variations is, it also causes the giant stars to eject mass either in clumps or as an expanding disc. "A Sherlock Holmes is needed to solve this very frustrating mystery," concludes Nicholls. Notes [1] Precise brightness measurements were made by the MACHO and OGLE collaborations, running on telescopes in Australia and Chile, respectively. The OGLE observations were made at the same time as the VLT observations. More information This research was presented in two papers: one appeared in the November issue of the Monthly Notices of the Royal Astronomical Society ("Long Secondary Periods in Variable Red Giants", by C. P. Nicholls et al.), and the other has just been published in the Astrophysical Journal ("Evidence for mass ejection associated with long secondary periods in red giants", by P. R. Wood and C. P. Nicholls). The team is composed of Christine P. Nicholls and Peter R. Wood (Research School of Astronomy and
Cosmological applications of Padé approximant
International Nuclear Information System (INIS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, any function can be approximated by a Padé approximant, the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we apply the Padé approximant to two problems. First, we obtain an analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these exercises, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
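The abstract's claim that a Padé approximant often beats a truncated Taylor series of the same order is easy to check numerically. The sketch below uses the standard [2/2] Padé approximant of exp(x) (a textbook closed form, not the paper's cosmological application); the evaluation point x = 1 is an arbitrary illustrative choice.

```python
import math

def pade22_exp(x):
    # [2/2] Pade approximant of exp(x): a ratio of two quadratics that
    # matches the Taylor series of exp(x) through order x^4.
    return (1 + x/2 + x*x/12) / (1 - x/2 + x*x/12)

def taylor4_exp(x):
    # Truncated Taylor series of exp(x) with the same number of coefficients.
    return sum(x**k / math.factorial(k) for k in range(5))

x = 1.0
exact = math.exp(x)
err_pade = abs(pade22_exp(x) - exact)
err_taylor = abs(taylor4_exp(x) - exact)
print(err_pade < err_taylor)  # the rational form wins at equal order
```

At x = 1 the Padé error is roughly half the Taylor error, and the gap widens as x grows, which is the behavior the abstract exploits for the luminosity distance.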
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
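For orientation, the baseline that MLMC-ABC accelerates is plain rejection ABC: sample a parameter from the prior, simulate data, and accept when a summary statistic is close to the observed one. The sketch below is that baseline only (not the paper's MLMC or SMC machinery); the Gaussian model, tolerance, and sample sizes are invented for illustration.

```python
import random
import statistics

def abc_rejection(data, n_samples=20000, tol=0.1, rng=random):
    # Rejection ABC for the mean of a unit-variance Gaussian:
    # keep theta whenever the simulated sample mean lands within
    # `tol` of the observed sample mean.
    obs = statistics.fmean(data)
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(-5.0, 5.0)          # flat prior on the mean
        sim = [rng.gauss(theta, 1.0) for _ in range(len(data))]
        if abs(statistics.fmean(sim) - obs) < tol:
            accepted.append(theta)
    return accepted

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(50)]
post = abc_rejection(data)
print(round(statistics.fmean(post), 2))  # near the true mean 2.0
```

Every accepted draw here costs a full simulation at the finest accuracy, which is exactly the i.i.d. cost the multilevel construction reduces.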
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
Exact and approximate multiple diffraction calculations
International Nuclear Information System (INIS)
Alexander, Y.; Wallace, S.J.; Sparrow, D.A.
1976-08-01
A three-body potential scattering problem is solved in the fixed-scatterer model, exactly and approximately, to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high-energy hadron-nucleon amplitudes. The exact fixed-scatterer calculations are compared to the Glauber approximation, eikonal-expansion results, and a noneikonal approximation.
Bent approximations to synchrotron radiation optics
International Nuclear Information System (INIS)
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors
Local density approximations for relativistic exchange energies
International Nuclear Information System (INIS)
MacDonald, A.H.
1986-01-01
The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
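For a fixed tree topology, the parsimony score itself is computable in linear time by Fitch's classic algorithm; the NP-hardness the abstract refers to comes from searching over topologies, which the authors attack via Steiner trees. The sketch below shows only the per-character Fitch scoring step under Hamming distance; it is standard textbook material, not the authors' approximation algorithm.

```python
def fitch(tree):
    # Fitch's small-parsimony algorithm for one character.
    # tree: a leaf state such as "A", or a (left, right) pair of subtrees.
    # Returns (set of candidate ancestral states, minimum number of changes).
    if isinstance(tree, str):
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:                      # children agree: no extra change needed
        return inter, lc + rc
    return ls | rs, lc + rc + 1    # disagreement: one substitution somewhere

states, cost = fitch((("A", "C"), ("C", "G")))
print(cost)  # minimum number of substitutions on this fixed tree
```

Here the four leaves A, C, C, G on a balanced tree require 2 substitutions, with C as the unique most-parsimonious root state.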
APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS
Directory of Open Access Journals (Sweden)
Kambo, N. S.
2012-11-01
Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, which are compared with benchmark approximations.
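Both proposed methods ultimately feed into the standard GI/M/1 machinery: the key quantity is the root σ of σ = A*(μ(1 − σ)), where A* is the Laplace-Stieltjes transform of the interarrival distribution and μ the service rate; queue-length and waiting-time measures follow from σ. A minimal fixed-point sketch (standard queueing theory, not the paper's moment-matching procedures; the M/M/1 case is used only as a sanity check, where σ must equal ρ = λ/μ):

```python
def gim1_sigma(lst, mu, tol=1e-12):
    # Solve sigma = A*(mu * (1 - sigma)) by fixed-point iteration,
    # where `lst` is the Laplace-Stieltjes transform of the
    # interarrival-time distribution.
    sigma = 0.1
    while True:
        new = lst(mu * (1.0 - sigma))
        if abs(new - sigma) < tol:
            return new
        sigma = new

lam, mu = 1.0, 2.0
exp_lst = lambda s: lam / (lam + s)   # exponential interarrivals -> M/M/1
sigma = gim1_sigma(exp_lst, mu)
print(round(sigma, 6))  # for M/M/1 this equals rho = lam/mu = 0.5
```

Replacing `exp_lst` with the LST of a fitted two-exponential mixture is precisely where the paper's second method would plug in.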
Diagonal Pade approximations for initial value problems
International Nuclear Information System (INIS)
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
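The lowest diagonal Padé approximant of the evolution operator e^{Ah} is the familiar Crank-Nicolson update (1 + Ah/2)(1 − Ah/2)^{-1}. The sketch below shows only this scalar [1/1] case for the test problem y' = λy; it is a standard illustration, not the factored higher-order scheme of the paper.

```python
import math

def step_pade11(y, lam, h):
    # Diagonal [1/1] Pade approximant of exp(lam*h), i.e. the
    # Crank-Nicolson update, applied to y.
    return y * (1 + lam * h / 2) / (1 - lam * h / 2)

lam, h, y = -1.0, 0.1, 1.0
for _ in range(10):              # integrate y' = lam*y over t in [0, 1]
    y = step_pade11(y, lam, h)
print(abs(y - math.exp(-1.0)))   # small: the update is 2nd-order accurate
```

Unlike the explicit Taylor (Euler) update 1 + λh, the rational form remains stable for stiff λh, which is why diagonal Padé schemes suit initial value problems.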
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
2016-08-26
Aug 26, 2016 ... In this paper, we propose a definition of approximation property which is called the metric invariant translation approximation property for a countable discrete metric space. Moreover, we use ... Department of Applied Mathematics, Shanghai Finance University, Shanghai 201209, People's Republic of China ...
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties of the reduced C*-algebra C*_r(G) can give us the approximation properties of G. For example, Lance [7] proved that the nuclearity of C*_r(G) is equivalent to the amenability of G; ...
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-01-01
We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We show that the famous Kalman update formula is a particular case of this update.
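For reference, the classical Kalman update that the abstract recovers as a special case reads, in the scalar setting, K = PH/(HPH + R), m ← m + K(y − Hm), P ← (1 − KH)P. This is textbook material, not the polynomial-chaos construction itself; the numbers below are invented.

```python
def kalman_update(m, P, y, H, R):
    # Scalar Kalman update: prior N(m, P), observation y = H*x + noise N(0, R).
    K = P * H / (H * P * H + R)        # Kalman gain
    m_post = m + K * (y - H * m)       # posterior mean
    P_post = (1 - K * H) * P           # posterior variance
    return m_post, P_post

m, P = kalman_update(m=0.0, P=4.0, y=2.0, H=1.0, R=1.0)
print(m, P)  # mean pulled toward the observation, variance reduced
```

With a diffuse prior (P = 4) and a precise observation (R = 1), the gain is 0.8, so the posterior mean moves most of the way toward the data.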
Simultaneous approximation in scales of Banach spaces
International Nuclear Information System (INIS)
Bramble, J.H.; Scott, R.
1978-01-01
The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods
Approximation algorithms for guarding holey polygons ...
African Journals Online (AJOL)
Guarding edges of polygons is a version of art gallery problem.The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...
Efficient automata constructions and approximate automata
Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.
2008-01-01
In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern
Efficient automata constructions and approximate automata
Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.
2006-01-01
In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern
Spline approximation, Part 1: Basic methodology
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy, point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem, they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation, and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles, spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
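A minimal least-squares fit with a truncated-power basis, in the spirit of the splines built from truncated polynomials mentioned above: a linear spline with one knot has basis {1, x, (x − k)_+}. The knot location, test data, and the normal-equations solver are illustrative assumptions, not the paper's geodetic procedure.

```python
def tpow(x, knot):
    # Truncated-power basis of a linear spline with a single knot.
    return [1.0, x, max(x - knot, 0.0)]

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def spline_fit(xs, ys, knot):
    # Least-squares spline coefficients via the normal equations B^T B a = B^T y.
    B = [tpow(x, knot) for x in xs]
    n = len(B[0])
    BtB = [[sum(Bi[r] * Bi[c] for Bi in B) for c in range(n)] for r in range(n)]
    Bty = [sum(B[i][r] * ys[i] for i in range(len(xs))) for r in range(n)]
    return solve(BtB, Bty)

xs = [i / 10 for i in range(11)]
ys = [x if x < 0.5 else 2 * x - 0.5 for x in xs]  # slope changes at x = 0.5
a = spline_fit(xs, ys, knot=0.5)
print([round(v, 6) for v in a])  # recovers intercept 0, slope 1, slope jump 1
```

Because the test data are exactly a kinked line, the fit recovers the coefficients (0, 1, 1); with noisy scan data the same normal equations give the least-squares spline.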
Nonlinear approximation with general wave packets
DEFF Research Database (Denmark)
Borup, Lasse; Nielsen, Morten
2005-01-01
We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...
Quirks of Stirling's Approximation
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
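The pitfall the article describes is easy to demonstrate: the "naive" form ln n! ≈ n ln n − n drops a ½ ln(2πn) term that is far from negligible at modest n. A quick numerical check (n = 60 is an arbitrary classroom-scale choice):

```python
import math

n = 60
exact = math.lgamma(n + 1)                       # ln n! computed exactly
naive = n * math.log(n) - n                      # ln n! ~ n ln n - n
full = naive + 0.5 * math.log(2 * math.pi * n)   # with the sqrt(2*pi*n) factor
print(exact - naive)   # a gap of several units even at n = 60
print(exact - full)    # tiny: the correction term does the work
```

Only in the thermodynamic limit (n of order Avogadro's number) does the naive form become relatively harmless, which is why its casual use on small systems leads to the incorrect conclusions the article discusses.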
Approximations for stop-loss reinsurance premiums
Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.
2005-01-01
Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are
Improved Dutch Roll Approximation for Hypersonic Vehicle
Directory of Open Access Journals (Sweden)
Liang-Liang Yin
2014-06-01
An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which produces large errors in the usual practical approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well and the error is below 10%.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
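The idea of an approximate error computed from a subset of rays can be sketched as subsampled least-squares gradients: at each step, the error (and hence the descent direction) is evaluated on a random subset of the measurement rows. The toy system, step size, and plain gradient steps below are invented for illustration; the patent's constrained conjugate-gradient machinery is not reproduced.

```python
import random

def grad_subset(A, b, x, rows):
    # Gradient of 0.5*||Ax - b||^2 restricted to a subset of rows ("rays"):
    # an approximate error/gradient computed from partial data.
    n = len(x)
    g = [0.0] * n
    for i in rows:
        r = sum(A[i][j] * x[j] for j in range(n)) - b[i]   # residual of ray i
        for j in range(n):
            g[j] += A[i][j] * r
    return g

random.seed(1)
A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0], [2.0, 1.0]]       # 4 rays, 2 unknowns
x_true = [1.0, -1.0]
b = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(4)]
x = [0.0, 0.0]
for _ in range(1000):
    rows = random.sample(range(4), 2)        # use only half of the rays
    g = grad_subset(A, b, x, rows)
    x = [x[j] - 0.1 * g[j] for j in range(2)]
print([round(v, 3) for v in x])  # approaches [1.0, -1.0]
```

Because the system is consistent, the true solution is a fixed point for every subset, so the subsampled iteration still converges while each step touches only part of the data.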
Hawk, Larry W; Yartz, Andrew R; Pelham, William E; Lock, Thomas M
2003-01-01
The present study investigated attentional modification of prepulse inhibition of startle among boys with and without attention-deficit hyperactivity disorder (ADHD). Two hypotheses were tested: (1) whether ADHD is associated with diminished prepulse inhibition during attended prestimuli, but not ignored prestimuli, and (2) whether methylphenidate selectively increases prepulse inhibition to attended prestimuli among boys with ADHD. Participants were 17 boys with ADHD and 14 controls. Participants completed a tone discrimination task in each of two sessions separated by 1 week. ADHD boys were administered methylphenidate (0.3 mg/kg) in one session and placebo in the other session in a randomized, double-blind fashion. During each series of 72 tones (75 dB; half 1200-Hz, half 400-Hz), participants were paid to attend to one pitch and ignore the other. Bilateral eyeblink electromyogram startle responses were recorded in response to acoustic probes (50-ms, 102-dB white noise) presented following the onset of two-thirds of tones, and during one-third of intertrial intervals. Relative to controls, boys with ADHD exhibited diminished prepulse inhibition 120 ms after onset of attended but not ignored prestimuli following placebo administration. Methylphenidate selectively increased prepulse inhibition to attended prestimuli at 120 ms among boys with ADHD to a level comparable to that of controls, who did not receive methylphenidate. These data are consistent with the hypothesis that ADHD involves diminished selective attention and suggest that methylphenidate ameliorates the symptoms of ADHD, at least in part, by altering an early attentional mechanism.
Directory of Open Access Journals (Sweden)
Joe Harris
2017-07-01
Employing the widely used ammonium carbonate diffusion method, we demonstrate that altering an extrinsic parameter, desiccator size, which is rarely detailed in publications, can alter the route of crystallization. Hexagonally packed assemblies of spherical magnesium-calcium carbonate particles or spherulitic aragonitic particles can be selectively prepared from the same initial reaction solution by simply changing the internal volume of the desiccator, thereby changing the rate of carbonate addition and consequently precursor formation. This demonstrates that it is not merely the quantity of an additive that can control particle morphogenesis and phase selectivity; control of other, often ignored, parameters is vital to ensure adequate reproducibility.
Regression with Sparse Approximations of Data
DEFF Research Database (Denmark)
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \\(k\\)-nearest neighbors regression (\\(k\\)-NNR), and more generally, local polynomial kernel regression. Unlike \\(k\\)-NNR, however, SPARROW can adapt the number of regressors to use based...
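For contrast, the k-NNR baseline that SPARROW generalizes fits in a few lines: predict at a query point by averaging the regressands of its k nearest regressors. The sketch below is that baseline only (SPARROW instead selects regressors by sparse approximation); the 1-D data and k = 3 are illustrative assumptions.

```python
def knn_regress(train, x, k=3):
    # k-nearest-neighbors regression: average the regressands of the
    # k training points whose regressors are closest to the query x.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

train = [(t / 10, 2 * (t / 10) + 1) for t in range(21)]  # y = 2x + 1 on [0, 2]
print(knn_regress(train, 1.0))  # near 2*1.0 + 1 = 3.0
```

The key design difference is how the neighborhood is chosen: k-NNR fixes k a priori, while SPARROW lets the sparse approximation adapt the number of regressors to the query point.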
Conditional Density Approximations with Mixtures of Polynomials
DEFF Research Database (Denmark)
Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre
2015-01-01
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...
Hardness and Approximation for Network Flow Interdiction
Chestnut, Stephen R.; Zenklusen, Rico
2015-01-01
In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}$...
Approximation of the semi-infinite interval
Directory of Open Access Journals (Sweden)
A. McD. Mercer
1980-01-01
The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz's result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
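The Bernstein construction that the note starts from is short enough to state in code: B_n f(x) = Σ_k f(k/n) C(n,k) x^k (1−x)^{n−k} on [0, 1]. The test function and evaluation point below are arbitrary illustrative choices.

```python
import math

def bernstein(f, n, x):
    # Degree-n Bernstein polynomial of f on [0, 1]: a weighted average of
    # f at the grid points k/n, with binomial-distribution weights.
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: math.sin(math.pi * x)
for n in (10, 100):
    print(abs(bernstein(f, n, 0.3) - f(0.3)))  # error shrinks as n grows
```

The error decays like O(1/n) for smooth f; the Szász analogue replaces the binomial weights with Poisson weights to cover [0, ∞).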
Mathematical analysis, approximation theory and their applications
Gupta, Vijay
2016-01-01
Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.
Rodolfo, Kelvin S; Siringan, Fernando P
2006-03-01
Land subsidence resulting from excessive extraction of groundwater is particularly acute in East Asian countries. Some Philippine government sectors have begun to recognise that the sea-level rise of one to three millimetres per year due to global warming is a cause of worsening floods around Manila Bay, but are oblivious to, or ignore, the principal reason: excessive groundwater extraction is lowering the land surface by several centimetres to more than a decimetre per year. Such ignorance allows the government to treat flooding as a lesser problem that can be mitigated through large infrastructural projects that are both ineffective and vulnerable to corruption. Money would be better spent on preventing the subsidence by reducing groundwater pumping and moderating population growth and land use, but these approaches are politically and psychologically unacceptable. Even if groundwater use is greatly reduced and enlightened land-use practices are initiated, natural deltaic subsidence and global sea-level rise will continue to aggravate flooding, although at substantially lower rates.
Bond, Alan; Morrison-Saunders, Angus; Gunn, Jill A E; Pope, Jenny; Retief, Francois
2015-03-15
In the context of continuing uncertainty, ambiguity and ignorance in impact assessment (IA) prediction, the case is made that existing IA processes are based on false 'normal' assumptions that science can solve problems and transfer knowledge into policy. Instead, a 'post-normal science' approach is needed that acknowledges the limits of current levels of scientific understanding. We argue that this can be achieved through embedding evolutionary resilience into IA; using participatory workshops; and emphasising adaptive management. The goal is an IA process capable of informing policy choices in the face of uncertain influences acting on socio-ecological systems. We propose a specific set of process steps to operationalise this post-normal science approach which draws on work undertaken by the Resilience Alliance. This process differs significantly from current models of IA, as it has a far greater focus on avoidance of, or adaptation to (through incorporating adaptive management subsequent to decisions), unwanted future scenarios rather than a focus on the identification of the implications of a single preferred vision. Implementing such a process would represent a culture change in IA practice as a lack of knowledge is assumed and explicit, and forms the basis of future planning activity, rather than being ignored. Copyright © 2014 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Slavica Singer
2010-12-01
Using the model of the entrepreneurial university, the paper presents major blockages (the university's own institutional rigidity, fragmented organization, lack of mutual trust between the business sector and universities, no real benchmarks, a legal framework not supportive of opening the university to new initiatives) in Triple Helix interactions in Croatia. Comparing the identified blockages with expectations (a multidimensional campus, cooperation with the business sector and other stakeholders in designing new educational and research programs) expressed by HEIs in developed countries around the world (2008 EIU survey) indicates new challenges for universities in developing countries. With a Triple Helix approach, not confined within national borders but treated as an international networking opportunity, these challenges can be seen as opportunities; otherwise they are threats. On the scale of ignoring, observing, participating in and leading positive changes in their surroundings, used to measure the vitality of Triple Helix interactions, Croatian universities are located between the ignoring and observing positions. To move them towards a leading position, coordinated and consistent policies are needed that focus on eliminating the identified blockages. Universities should take the lead in this process; otherwise they will lose credibility as desired partners in developing space for Triple Helix interactions.
Evaluation of the successive approximations method for acoustic streaming numerical simulations.
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximation method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving the acoustic streaming problems, since it affects the global flow. By adequately calculating the initial condition for first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren
2017-01-01
, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Nonlinear Ritz approximation for Fredholm functionals
Directory of Open Access Journals (Sweden)
Mudhir A. Abdul Hussain
2015-11-01
In this article we use the modified Lyapunov-Schmidt reduction to find a nonlinear Ritz approximation for a Fredholm functional. This functional corresponds to a nonlinear Fredholm operator defined by a nonlinear fourth-order differential equation.
Euclidean shortest paths exact or approximate algorithms
Li, Fajie
2014-01-01
This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.
Square well approximation to the optical potential
International Nuclear Information System (INIS)
Jain, A.K.; Gupta, M.C.; Marwadi, P.R.
1976-01-01
Approximations for obtaining T-matrix elements for a sum of several potentials, in terms of the T-matrices for the individual potentials, are studied. Based on model calculations for the S-wave for a sum of two separable non-local potentials with Yukawa-type form factors, and for a sum of two delta-function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all energy regions by the sum of the T-matrices for the individual potentials. Based on this, an approximate method for finding the T-matrix for any local potential by approximating it by a sum of a suitable number of square wells is presented. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to Saxon-Wood potentials and good agreement with exact results is found. (author)
Approximation for the adjoint neutron spectrum
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2002-01-01
The purpose of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; with these approximations, and taking into account the case of the narrow resonances, the terms were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared to the results generated with the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)
Saddlepoint approximation methods in financial engineering
Kwok, Yue Kuen
2018-01-01
This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables. The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...
Methods of Fourier analysis and approximation theory
Tikhonov, Sergey
2016-01-01
Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles in this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.
Pion-nucleus cross sections approximation
International Nuclear Information System (INIS)
Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.
1990-01-01
An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which can be applied in the energy range exceeding several dozen MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs
APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION
Directory of Open Access Journals (Sweden)
Mădălina Roxana Buneci
2016-12-01
Full Text Available The purpose of this paper is to provide a set of Maple procedures for constructing approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.
Steepest descent approximations for accretive operator equations
International Nuclear Information System (INIS)
Chidume, C.E.
1993-03-01
A necessary and sufficient condition is established for the strong convergence of the steepest descent approximation to a solution of equations involving quasi-accretive operators defined on a uniformly smooth Banach space. (author). 49 refs
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
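The lowrank idea in the abstract above can be illustrated with a toy example. The sketch below is not the authors' sampling-based construction (which selects representative spatial locations and wavenumbers); it only shows, under that caveat, how a best rank-1 approximation of a small matrix can be computed with plain alternating power iteration. The function name and test matrix are illustrative choices.

```python
import math
import random

def rank1_approx(A, iters=100):
    """Best rank-1 approximation sigma * u * v^T of A via alternating power iteration."""
    m, n = len(A), len(A[0])
    rng = random.Random(0)
    v = [rng.random() + 0.1 for _ in range(n)]   # random start with nonzero overlap
    u = [0.0] * m
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]   # u <- A v
        nu = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nu for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]   # v <- A^T u
        nv = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / nv for x in v]
    sigma = sum(u[i] * A[i][j] * v[j] for i in range(m) for j in range(n))
    return [[sigma * u[i] * v[j] for j in range(n)] for i in range(m)]

A = [[2.0, 4.0], [1.0, 2.0], [3.0, 6.0]]   # exactly rank 1, so the error should vanish
B = rank1_approx(A)
err = max(abs(A[i][j] - B[i][j]) for i in range(3) for j in range(2))
print(err < 1e-9)
```

Because the test matrix is exactly rank 1, the reconstruction error is at floating-point level; for full-rank matrices the same routine returns the leading singular component.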
An overview on Approximate Bayesian computation*
Directory of Open Access Journals (Sweden)
Baragatti Meïli
2014-01-01
Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
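As a minimal illustration of the likelihood-free idea, the sketch below implements plain ABC rejection sampling for the mean of a Gaussian: draw a parameter from the prior, simulate data, and keep the draw only when a summary statistic of the simulation lands close to the observed one. The prior, the tolerance `eps`, and all names are illustrative choices, not anything from the overview itself.

```python
import random
import statistics

random.seed(1)
observed = [random.gauss(3.0, 1.0) for _ in range(100)]   # "real" data, unknown mean 3
obs_mean = statistics.fmean(observed)

def abc_rejection(n_draws=5000, eps=0.1):
    """ABC rejection: keep prior draws whose simulated summary lands near the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-10.0, 10.0)                           # flat prior on the mean
        sim_mean = statistics.fmean(random.gauss(theta, 1.0) for _ in range(100))
        if abs(sim_mean - obs_mean) < eps:                            # likelihood never evaluated
            accepted.append(theta)
    return accepted

posterior = abc_rejection()
print(len(posterior))
```

The accepted draws approximate the posterior of the mean; their average lands near 3 without the likelihood ever being computed.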
Approximate Computing Techniques for Iterative Graph Algorithms
Energy Technology Data Exchange (ETDEWEB)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram
2017-12-18
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximation techniques to enable scalable graph analytics on data of importance to several applications in science, and their subsequent adoption to scale similar graph algorithms.
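Loop perforation, one of the heuristics named above, simply skips part of a loop's work and accepts the resulting inaccuracy. The sketch below is not the authors' implementation; it perforates the convergence loop of a plain power-iteration PageRank by running far fewer sweeps, then measures how little the scores move. The graph and sweep counts are illustrative.

```python
def pagerank(adj, sweeps, d=0.85):
    """Plain power-iteration PageRank over an adjacency list (no dangling nodes)."""
    n = len(adj)
    pr = [1.0 / n] * n
    for _ in range(sweeps):
        new = [(1.0 - d) / n] * n          # teleport term
        for u, nbrs in enumerate(adj):
            share = d * pr[u] / len(nbrs)  # u spreads its score over its out-edges
            for v in nbrs:
                new[v] += share
        pr = new
    return pr

adj = [[1, 2], [2], [0], [0, 2]]           # small illustrative graph
exact = pagerank(adj, sweeps=60)
approx = pagerank(adj, sweeps=6)           # perforated: 10x fewer loop iterations
err = max(abs(a - b) for a, b in zip(exact, approx))
print(err < 0.05)
```

Because the iteration contracts at rate d per sweep, cutting 90% of the sweeps still leaves every score within a few percent of the converged value, which is the quality/performance trade perforation exploits.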
Approximate Simulation of Acute Hypobaric Hypoxia with Normobaric Hypoxia
Conkin, J.; Wessel, J. H., III
2011-01-01
INTRODUCTION. Some manufacturers of reduced oxygen (O2) breathing devices claim a comparable hypobaric hypoxia (HH) training experience by providing an inspired O2 fraction (F(sub I)O2) that reproduces the ambient O2 partial pressure (pO2) of the target altitude. METHODS. Literature from investigators and manufacturers indicates that these devices may not properly account for the 47 mmHg of water vapor partial pressure that reduces the inspired partial pressure of O2 (P(sub I)O2). Nor do they account for the complex reality of alveolar gas composition as defined by the Alveolar Gas Equation. In essence, by providing iso-pO2 conditions for normobaric hypoxia (NH) as for HH exposures, the devices ignore P(sub A)O2 and P(sub A)CO2 as more direct agents inducing signs and symptoms of hypoxia during acute training exposures. RESULTS. There is not a sufficient integrated physiological understanding of the determinants of P(sub A)O2 and P(sub A)CO2 under acute NH and HH given the same hypoxic pO2 to claim a device provides isohypoxia. Isohypoxia is defined as the same distribution of hypoxia signs and symptoms under any circumstances of equivalent hypoxic dose, and hypoxic pO2 is an incomplete hypoxic dose. Some devices that claim an equivalent HH experience under NH conditions significantly overestimate the HH condition, especially when simulating altitudes above 10,000 feet (3,048 m). CONCLUSIONS. At best, the claim should be that the devices provide an approximate HH experience, since they only duplicate the ambient pO2 at sea level as at altitude (iso-pO2 machines). An approach to reduce the overestimation is to at least provide machines that create the same P(sub I)O2 conditions (iso-P(sub I)O2 machines) at sea level as at the target altitude, a simple software upgrade.
Approximative solutions of stochastic optimization problem
Czech Academy of Sciences Publication Activity Database
Lachout, Petr
2010-01-01
Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf
Lattice quantum chromodynamics with approximately chiral fermions
Energy Technology Data Exchange (ETDEWEB)
Hierl, Dieter
2008-05-15
In this work we present Lattice QCD results obtained with approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore, we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
Stochastic quantization and mean field approximation
International Nuclear Information System (INIS)
Jengo, R.; Parga, N.
1983-09-01
In the context of the stochastic quantization we propose factorized approximate solutions for the Fokker-Planck equation for the XY and Z_N spin systems in D dimensions. The resulting differential equation for a factor can be solved and it is found to give in the limit of t→infinity the mean field or, in the more general case, the Bethe-Peierls approximation. (author)
Polynomial approximation of functions in Sobolev spaces
International Nuclear Information System (INIS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional-order Sobolev spaces is treated as well as the usual integer-order spaces and several nonstandard Sobolev-like spaces
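The polynomial-plus-remainder decomposition can be illustrated with the classical pointwise Taylor expansion; this is only a toy analogue of the averaged Taylor series used in the paper above, with all names being illustrative.

```python
import math

def taylor_sin(x, terms):
    """Degree-(2*terms - 1) Taylor polynomial of sin about 0: the 'polynomial' part."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 0.5
p = taylor_sin(x, 3)                               # x - x^3/6 + x^5/120
remainder_bound = abs(x) ** 7 / math.factorial(7)  # first omitted term bounds the remainder
print(abs(math.sin(x) - p) <= remainder_bound)
```

For the alternating sine series the first omitted term bounds the remainder, so the inequality printed above holds; the Bramble-Hilbert machinery replaces this pointwise bound with Sobolev-norm bounds.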
Magnus approximation in the adiabatic picture
International Nuclear Information System (INIS)
Klarsfeld, S.; Oteo, J.A.
1991-01-01
A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs
Lattice quantum chromodynamics with approximately chiral fermions
International Nuclear Information System (INIS)
Hierl, Dieter
2008-05-01
In this work we present Lattice QCD results obtained with approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore, we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
Approximating centrality in evolving graphs: toward sublinearity
Priest, Benjamin W.; Cybenko, George
2017-05-01
The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
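A minimal sketch of the CountSketch idea mentioned above, applied to streaming degree estimation: each edge endpoint updates a few hashed, signed counters, and a node's degree is recovered as the median of its signed counter values. The hashing here is a toy stand-in (not the pairwise-independent families the theory assumes), and all names and parameters are illustrative.

```python
import random
from statistics import median

class CountSketch:
    """Minimal CountSketch: signed hashed counters, queried via a median."""
    def __init__(self, rows=5, width=128, seed=0):
        rng = random.Random(seed)
        self.salts = [(rng.randrange(1 << 30), rng.randrange(1 << 30)) for _ in range(rows)]
        self.width = width
        self.table = [[0] * width for _ in range(rows)]

    def _bucket_sign(self, r, key):
        a, b = self.salts[r]
        bucket = hash((a, key)) % self.width          # toy salted hash, not pairwise independent
        sign = 1 if hash((b, key)) % 2 == 0 else -1
        return bucket, sign

    def add(self, key):
        for r in range(len(self.table)):
            j, s = self._bucket_sign(r, key)
            self.table[r][j] += s

    def estimate(self, key):
        ests = []
        for r in range(len(self.table)):
            j, s = self._bucket_sign(r, key)
            ests.append(s * self.table[r][j])
        return median(ests)

# Stream of edges: node 0 has true degree 39, the rest have small degrees.
sketch = CountSketch()
edges = [(0, i) for i in range(1, 40)] + [(i, i + 1) for i in range(1, 10)]
for u, v in edges:
    sketch.add(u)      # one unit of degree per endpoint
    sketch.add(v)
print(sketch.estimate(0))
```

The sketch uses memory proportional to rows * width regardless of how many distinct nodes appear, which is the point of the semi-streaming setting; the median makes the estimate robust to the occasional bucket collision.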
I Want to but I Won't: Pluralistic Ignorance Inhibits Intentions to Take Paternity Leave in Japan
Directory of Open Access Journals (Sweden)
Takeru Miyajima
2017-09-01
Full Text Available The number of male employees who take paternity leave in Japan has been low in past decades. However, the majority of male employees actually wish to take paternity leave if they were to have a child. Previous studies have demonstrated that the organizational climate in workplaces is the major determinant of male employees' use of family-friendly policies, because males are often stigmatized and fear receiving negative evaluations from others. While such normative pressure might derive from prevailing social practices relevant to people's expectations of social roles (e.g., “Men make houses, women make homes”), these social practices are often perpetuated even after the majority of group members have ceased to support them. The perpetuation of this unpopular norm could be caused by the social psychological phenomenon of pluralistic ignorance. While researchers have explored people's beliefs about gender roles from various perspectives, a profound understanding of these beliefs regarding gender role norms, and of the accuracy of beliefs about others, remains to be attained. The current research examined the association between pluralistic ignorance and the perpetually low rates of taking paternity leave in Japan. Specifically, Study 1 (n = 299) examined Japanese male employees' (ages ranging from the 20s to the 40s) attitudes toward paternity leave and their estimates of the attitudes of other men of the same age, as well as behavioral intentions (i.e., desire and willingness) to take paternity leave if they had a child in the future. The results demonstrated that male employees overestimated other men's negative attitudes toward paternity leave. Moreover, those who had positive attitudes toward taking leave and attributed negative attitudes to others were less willing to take paternity leave than were those who had positive attitudes and believed others shared those attitudes, although there was no significant difference between their desires to take paternity
I Want to but I Won't: Pluralistic Ignorance Inhibits Intentions to Take Paternity Leave in Japan.
Miyajima, Takeru; Yamaguchi, Hiroyuki
2017-01-01
The number of male employees who take paternity leave in Japan has been low in past decades. However, the majority of male employees actually wish to take paternity leave if they were to have a child. Previous studies have demonstrated that the organizational climate in workplaces is the major determinant of male employees' use of family-friendly policies, because males are often stigmatized and fear receiving negative evaluations from others. While such normative pressure might derive from prevailing social practices relevant to people's expectations of social roles (e.g., "Men make houses, women make homes"), these social practices are often perpetuated even after the majority of group members have ceased to support them. The perpetuation of this unpopular norm could be caused by the social psychological phenomenon of pluralistic ignorance. While researchers have explored people's beliefs about gender roles from various perspectives, a profound understanding of these beliefs regarding gender role norms, and of the accuracy of beliefs about others, remains to be attained. The current research examined the association between pluralistic ignorance and the perpetually low rates of taking paternity leave in Japan. Specifically, Study 1 (n = 299) examined Japanese male employees' (ages ranging from the 20s to the 40s) attitudes toward paternity leave and their estimates of the attitudes of other men of the same age, as well as behavioral intentions (i.e., desire and willingness) to take paternity leave if they had a child in the future. The results demonstrated that male employees overestimated other men's negative attitudes toward paternity leave. Moreover, those who had positive attitudes toward taking leave and attributed negative attitudes to others were less willing to take paternity leave than were those who had positive attitudes and believed others shared those attitudes, although there was no significant difference between their desires to take paternity leave. Study 2 (n
In the Casino of Life: Betting on Risks and Ignoring the Consequences of Climate Change and Hazards
Brosnan, D. M.
2016-12-01
Even faced with strong scientific evidence, decision-makers cite uncertainty and delay action. Scientists, confident in the quality of their science and acknowledging that uncertainty, while present, is low by scientific standards, grow frustrated as their information is ignored. Decreasing scientific uncertainty, a hallmark of long-term studies such as the IPCC reports, does little to motivate decision-makers. Imperviousness to scientific data is prevalent across all scales. Municipalities prefer to spend millions of dollars on engineered responses to climate change and hazards, even when science shows that they perform less well than nature-based ones and cost much more. California is known to be at risk from tsunamis generated by earthquakes off Alaska. A study using a magnitude 9.1 earthquake, similar to a 1965 event, calculated the immediate economic price tag in infrastructure loss and business interruption at $9.5 billion. The exposure of Los Angeles/Long Beach port trade to damage and downtime exceeds $1.2 billion; business interruption would triple the figure. Yet despite several excellent scientific studies, the State is ill prepared; investments in infrastructure, commerce and conservation risk being literally washed away. Globally there is a 5-10% probability of an extreme geohazard, e.g., a Tambora-like eruption, occurring in this century. With a "value of statistical life" of $2.2 million and a population of 7 billion, the risk for fatalities alone is $1.1-7 billion per year. But there is little interest in investing the $0.5-3.5 billion per year in volcano monitoring necessary to reduce fatalities and lower the risks of global conflict, starvation, and societal destruction. More science and less uncertainty is clearly not the driver of action. But is speaking with certainty really the answer? Decision-makers and scientists are in the same casino of life but rarely play at the same tables. Decision-makers bet differently from scientists. To motivate action we need to be cognizant of
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
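The Robbins-Monro scheme and trajectory (Polyak) averaging can be illustrated in a few lines. The sketch below finds the root of h(t) = t - E[X] from noisy draws of X while averaging the whole trajectory; the step-size exponent and the target are illustrative choices, and a practical implementation would typically discard a burn-in period before averaging.

```python
import random

random.seed(2)

def robbins_monro(n_steps=20000, target=5.0):
    """Robbins-Monro root finding for h(t) = t - E[X], with full-trajectory averaging."""
    theta, avg = 0.0, 0.0
    for n in range(1, n_steps + 1):
        x = random.gauss(target, 2.0)           # noisy observation with mean `target`
        theta -= (theta - x) / n ** 0.7         # step size a_n = n^(-0.7)
        avg += (theta - avg) / n                # running trajectory average
    return theta, avg

last_iterate, averaged = robbins_monro()
print(round(averaged, 2))
```

The averaged estimator smooths the noise of the individual iterates, which is the efficiency property the paper establishes for the SAMCMC setting.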
'LTE-diffusion approximation' for arc calculations
International Nuclear Information System (INIS)
Lowke, J J; Tanaka, M
2006-01-01
This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations which include diffusion of charges agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode
Semiclassical initial value approximation for Green's function.
Kay, Kenneth G
2010-06-28
A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.
Approximate Bayesian evaluations of measurement uncertainty
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Unlike exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef
2017-06-30
Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
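A stripped-down illustration of the underlying least-squares polynomial projection (single level, uniform rather than optimal sampling, so none of the multilevel machinery of the paper): fit a cubic to random samples by solving the normal equations directly. All names are illustrative.

```python
import random

def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (Gaussian elimination)."""
    m = degree + 1
    # Normal equations (V^T V) c = V^T y for the Vandermonde matrix V of the sample points.
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Forward elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, m))) / A[i][i]
    return coef

random.seed(3)
xs = [random.uniform(-1, 1) for _ in range(100)]   # random (uniform) sample locations
ys = [x ** 3 - 2 * x for x in xs]                  # noiseless target, itself a cubic
coeffs = polyfit_ls(xs, ys, 3)
print([round(v, 6) for v in coeffs])               # close to [0, -2, 0, 1]
```

Since the target lies in the polynomial space, the projection recovers it exactly up to rounding; the paper's contribution is controlling how many such samples are needed, at which discretization accuracies, when each evaluation of `ys` is itself expensive and inexact.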
Monin, Benoît; Norton, Michael I
2003-05-01
A 5-day field study (N = 415) during and right after a shower ban demonstrated multifaceted social projection and the tendency to draw personality inferences from simple behavior in a time of drastic consensus change. Bathers thought showering was more prevalent than did non-bathers (false consensus) and respondents consistently underestimated the prevalence of the desirable and common behavior--be it not showering during the shower ban or showering after the ban (uniqueness bias). Participants thought that bathers and non-bathers during the ban differed greatly in their general concern for the community, but self-reports demonstrated that this gap was illusory (false polarization). Finally, bathers thought other bathers cared less than they did, whereas non-bathers thought other non-bathers cared more than they did (pluralistic ignorance). The study captures the many biases at work in social perception in a time of social change.
Smooth function approximation using neural networks.
Ferrari, Silvia; Stengel, Robert F
2005-01-01
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
Modified semiclassical approximation for trapped Bose gases
International Nuclear Information System (INIS)
Yukalov, V.I.
2005-01-01
A generalization of the semiclassical approximation is suggested allowing for an essential extension of its region of applicability. In particular, it becomes possible to describe Bose-Einstein condensation of a trapped gas in low-dimensional traps and in traps of low confining dimensions, for which the standard semiclassical approximation is not applicable. The result of the modified approach is shown to coincide with purely quantum-mechanical calculations for harmonic traps, including the one-dimensional harmonic trap. The advantage of the semiclassical approximation is in its simplicity and generality. Power-law potentials of arbitrary powers are considered. The effective thermodynamic limit is defined for any confining dimension. The behavior of the specific heat, isothermal compressibility, and density fluctuations is analyzed, with an emphasis on low confining dimensions, where the usual semiclassical method fails. The peculiarities of the thermodynamic characteristics in the effective thermodynamic limit are discussed
The binary collision approximation: Background and introduction
International Nuclear Information System (INIS)
Robinson, M.T.
1992-08-01
The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented
Self-similar continued root approximants
International Nuclear Information System (INIS)
Gluzman, S.; Yukalov, V.I.
2012-01-01
A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
Ancilla-approximable quantum state transformations
Energy Technology Data Exchange (ETDEWEB)
Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.
On Born approximation in black hole scattering
Batic, D.; Kelkar, N. G.; Nowakowski, M.
2011-12-01
A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude, although such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.
Ancilla-approximable quantum state transformations
International Nuclear Information System (INIS)
Blass, Andreas; Gurevich, Yuri
2015-01-01
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.
On transparent potentials: a Born approximation study
International Nuclear Information System (INIS)
Coudray, C.
1980-01-01
In the framework of the inverse scattering problem at fixed energy, a class of potentials transparent in the Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in the Born approximation are then stated, and bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy and for large values of the angular momentum, are found to be independent of the energy.
The adiabatic approximation in multichannel scattering
International Nuclear Information System (INIS)
Schulte, A.M.
1978-01-01
Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)
Minimal entropy approximation for cellular automata
International Nuclear Information System (INIS)
Fukś, Henryk
2014-01-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)
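The density response curve mentioned above can be estimated by direct simulation; a minimal sketch (ring size, time horizon, and initial density are arbitrary choices of this example, not taken from the paper) is:

```python
# Sketch: direct simulation of elementary CA rule 26 on a ring, measuring the
# density of 1s -- the quantity whose response curve the paper uses to compare
# the minimal-entropy and local-structure approximations.

import random

RULE = 26  # Wolfram code; output for neighborhood (l, c, r) is bit 4l+2c+r

def step(cells):
    n = len(cells)
    return [(RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def density(p0=0.5, n=1000, t=100, seed=1):
    rng = random.Random(seed)
    cells = [1 if rng.random() < p0 else 0 for _ in range(n)]
    for _ in range(t):
        cells = step(cells)
    return sum(cells) / n
```

Sweeping `p0` over (0, 1) and plotting `density(p0)` produces the empirical density response curve against which the approximations can be checked.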
Resummation of perturbative QCD by Padé approximants
International Nuclear Information System (INIS)
Gardi, E.
1997-01-01
In this lecture I present some of the new developments concerning the use of Padé Approximants (PA's) for resumming perturbative series in QCD. It is shown that PA's tend to reduce the renormalization scale and scheme dependence as compared to truncated series. In particular it is proven that in the limit where the β function is dominated by the 1-loop contribution, there is an exact symmetry that guarantees invariance of diagonal PA's under changing the renormalization scale. In addition it is shown that in the large-β_0 approximation diagonal PA's can be interpreted as a systematic method for approximating the flow of momentum in Feynman diagrams. This corresponds to a new multiple-scale generalization of the Brodsky-Lepage-Mackenzie (BLM) method to higher orders. I illustrate the method with the Bjorken sum rule and the vacuum polarization function. (author)
Fast wavelet based sparse approximate inverse preconditioner
Energy Technology Data Exchange (ETDEWEB)
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but it is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth manner. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
Becker, Stephen P; Garner, Annie A; Tamm, Leanne; Antonini, Tanya N; Epstein, Jeffery N
2017-03-13
Sluggish cognitive tempo (SCT) symptoms are associated with social difficulties in children, though findings are mixed and many studies have used global measures of social impairment. The present study tested the hypothesis that SCT would be uniquely associated with aspects of social functioning characterized by withdrawal and isolation, whereas attention deficit/hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) symptoms would be uniquely associated with aspects of social functioning characterized by inappropriate responding in social situations and active peer exclusion. Participants were 158 children (70% boys) between 7-12 years of age being evaluated for possible ADHD. Both parents and teachers completed measures of SCT, ADHD, ODD, and internalizing (anxiety/depression) symptoms. Parents also completed ratings of social engagement and self-control. Teachers also completed measures assessing asociality and exclusion, as well as peer ignoring and dislike. In regression analyses controlling for demographic characteristics and other psychopathology symptoms, parent-reported SCT symptoms were significantly associated with lower social engagement (e.g., starting conversations, joining activities). Teacher-reported SCT symptoms were significantly associated with greater asociality/withdrawal and ratings of more frequent ignoring by peers, as well as greater exclusion. ODD symptoms and ADHD hyperactive-impulsive symptoms were more consistently associated with other aspects of social behavior, including peer exclusion, being disliked by peers, and poorer self-control during social situations. Findings provide the clearest evidence to date that the social difficulties associated with SCT are primarily due to withdrawal, isolation, and low initiative in social situations. Social skills training interventions may be effective for children displaying elevated SCT symptomatology.
Perturbation expansions generated by an approximate propagator
International Nuclear Information System (INIS)
Znojil, M.
1987-01-01
Starting from a knowledge of an approximate propagator R at some trial energy guess E_0, a new perturbative prescription for a p-plet of bound states and of their energies is proposed. It generalizes the Rayleigh-Schroedinger (RS) degenerate perturbation theory to nondiagonal operators R (eliminating the RS need for their diagonalisation) and defines an approximate Hamiltonian T by mere inversion. The deviation V of T from the exact Hamiltonian H is assumed small only after subtraction of a further auxiliary Hartree-Fock-like separable 'self-consistent' potential U of rank p. The convergence is illustrated numerically on the anharmonic oscillator example.
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important application of these models to density estimation, missing data imputation, data compression and planning.
Unambiguous results from variational matrix Padé approximants
International Nuclear Information System (INIS)
Pindor, Maciej.
1979-10-01
Variational Matrix Padé Approximants (VMPA) are studied as a nonlinear variational problem. It is shown that although a stationary value of the Schwinger functional is a stationary value of the VMPA, the latter also has another stationary value. It is therefore proposed that instead of looking for a stationary point of the VMPA, one minimizes some non-negative functional and then calculates the VMPA at the point where the former has its absolute minimum. This approach, which we call the Method of the Variational Gradient (MVG), gives unambiguous results and is also shown to minimize the distance between the approximate and the exact stationary values of the Schwinger functional.
Faster and Simpler Approximation of Stable Matchings
Directory of Open Access Journals (Sweden)
Katarzyna Paluch
2014-04-01
Full Text Available We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previous most well-known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2) m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give the extension of the algorithm for computing stable many-to-many matchings.
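For readers unfamiliar with the baseline, the classical Gale-Shapley procedure for complete, strictly ordered preference lists can be sketched as below; this is not the paper's 3/2-approximation algorithm, which targets the harder variant with ties and incomplete lists.

```python
# Sketch: classical proposer-side Gale-Shapley for n men and n women with
# complete, strictly ordered preference lists (a baseline, not the paper's
# approximation algorithm for the variant with ties/incomplete lists).

def gale_shapley(men_pref, women_pref):
    n = len(men_pref)
    # rank[w][m] = position of man m in woman w's preference list
    rank = [{m: r for r, m in enumerate(p)} for p in women_pref]
    next_prop = [0] * n                 # next index each man will propose to
    wife, husband = [None] * n, [None] * n
    free = list(range(n))               # currently unmatched men
    while free:
        m = free.pop()
        w = men_pref[m][next_prop[m]]
        next_prop[m] += 1
        if husband[w] is None:          # w is free: accept
            husband[w], wife[m] = m, w
        elif rank[w][m] < rank[w][husband[w]]:  # w prefers m: swap
            free.append(husband[w])
            wife[husband[w]] = None
            husband[w], wife[m] = m, w
        else:                           # w rejects m
            free.append(m)
    return wife                         # wife[m] = woman matched to man m
```

The classical algorithm already runs in O(m) time on strict complete lists; the contribution of the paper is achieving the 3/2 guarantee at that cost in the generalized setting.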
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
Directory of Open Access Journals (Sweden)
T. I. Aliev
2013-03-01
Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0, 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
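The two-phase case can be made concrete. For a sum of two exponential phases with rates λ1 and λ2, matching a target mean m and coefficient of variation c has a closed form (a standard two-moment fit, not quoted from the paper; it is valid for 0.5 ≤ c² < 1, and smaller c requires more phases):

```python
# Sketch: two-moment fit of a two-phase hypoexponential distribution.
# For X = X1 + X2 with independent exponential phases of rates l1, l2:
#   E[X] = 1/l1 + 1/l2,   Var[X] = 1/l1**2 + 1/l2**2.
# Matching mean m and coefficient of variation c (0.5 <= c**2 < 1) gives
#   l1, l2 = 2 / (m * (1 +/- sqrt(2*c**2 - 1))).

import math

def hypoexp_rates(m, c):
    s = math.sqrt(2.0 * c * c - 1.0)   # requires c**2 >= 0.5
    return 2.0 / (m * (1.0 + s)), 2.0 / (m * (1.0 - s))

l1, l2 = hypoexp_rates(1.0, 0.8)
mean = 1.0 / l1 + 1.0 / l2
cv = math.sqrt(1.0 / l1**2 + 1.0 / l2**2) / mean
```

Sampling is then straightforward: draw one exponential per phase and add them, which yields random variates with the prescribed mean and coefficient of variation.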
On the dipole approximation with error estimates
Boßmann, Lea; Grummt, Robert; Kolb, Martin
2018-01-01
The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Hardness of approximation for strip packing
DEFF Research Database (Denmark)
Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin
2017-01-01
Strip packing is a classical packing problem, where the goal is to pack a set of rectangular objects into a strip of a given width, while minimizing the total height of the packing. The problem has multiple applications, for example, in scheduling and stock-cutting, and has been studied extensively......)-approximation by two independent research groups [FSTTCS 2016,WALCOM 2017]. This raises a question whether strip packing with polynomially bounded input data admits a quasi-polynomial time approximation scheme, as is the case for related two-dimensional packing problems like maximum independent set of rectangles or two...
DEFF Research Database (Denmark)
Jervelund, Signe Smith; Maltesen, Thomas; Wimmelmann, Camilla Lawaetz
2017-01-01
AIMS: Suboptimal healthcare utilisation and lower satisfaction with the patient-doctor encounter among immigrants has been documented. Immigrants' lack of familiarity with the healthcare system has been proposed as an explanation for this. This study investigated whether a systematic delivery...
Chubb, John E.
2003-01-01
Argues that market-driven education (charter schools, vouchers) is the most effective, albeit overlooked, reform strategy since publication of "A Nation at Risk." Describes corresponding growth of for-profit school management. Offers several recommendations to improve effectiveness of market-based reforms, such as state' continuing…
International Nuclear Information System (INIS)
Roche, P.
1995-01-01
A detailed critique is offered of United Kingdom (UK) political policy with respect to the Non-Proliferation Treaty, an interim agreement valid while nuclear disarmament was supposed to occur, by a representative of Greenpeace, the anti-nuclear campaigning group. The author argues that the civil and military nuclear programmes are still firmly linked, and emphasises his opinions by quoting examples of how UK politicians have broken treaty obligations in order to pursue their own political, and in some cases financial, goals. It is argued that the treaty has failed to force nuclear countries to disarm because it has promoted civil nuclear power programmes. (U.K.)
DEFF Research Database (Denmark)
Ploug, Thomas; Holm, Søren
2014-01-01
Many ICT services require that users explicitly consent to conditions of use and policies for the protection of personal information. This consent may become 'routinised'. We define the concept of routinisation and investigate to what extent routinisation occurs as well as the factors influencing...... routinisation in a survey study of internet use. We show that routinisation is common and that it is influenced by factors including gender, age, educational level and average daily internet use. We further explore the reasons users provide for not reading conditions and policies and show that they can...
CSIR Research Space (South Africa)
Barnard, E
2009-11-01
Full Text Available The authors have previously argued that the infamous "No Free Lunch" theorem for supervised learning is a paradoxical result of a misleading choice of prior probabilities. Here, they provide more analysis of the dangers of uniform densities...
DEFF Research Database (Denmark)
Persson, Karl Gunnar; Sharp, Paul Richard
This paper argues that imperfectly informed consumers use simple signals to identify the characteristics of wine. The geographical denomination and vintage of a wine as well as the characteristics of a particular wine will be considered here. However, the specific characteristics of a wine...... are difficult to ascertain ex ante given the enormous product variety. The reputation of a denomination will thus be an important guide for consumers when assessing individual wines. Denomination reputation is a function of average quality as revealed by the past performance of producers. The impact of past...... performance increases over time, since producers consider improved average quality to be an important factor in enhancing the price, but this necessitates monitoring of members in the denomination. The market and pricing of Tuscan red wines provide a natural experiment because there are a number...
DEFF Research Database (Denmark)
Højbjerg, Erik
institutions alike. The logic seems to be that financially capable individuals will enjoy social and political inclusion as well as an ability to exercise a stronger influence in markets.The paper specifically contributes to our understanding of the governmentalization of the present by addressing how...... and political goals? The research question will be discussed in the context of financial literacy educational initiatives. In the aftermath of the 2008 global financial crisis, increasing the financial literacy of ordinary citizen-‐consumers has taken a prominent position among regulators and financial...... - at least in part - the corporate spread of financial literacy educational initiatives can be observed as a particular form of power at-a-distance. The focus is on the role of private enterprise in governmentalizing the‘business of life’ by establishing and mobilizing specific conceptual forms around...
DEFF Research Database (Denmark)
Højbjerg, Erik
2015-01-01
, increasing the financial literacy of ordinary citizen-consumers has taken a prominent position among regulators and financial institutions alike. The logic seems to be that financially capable individuals will enjoy social and political inclusion as well as an ability to exercise a stronger influence....... The focus is on the role of private enterprise in governmentalizing the business of life by establishing and mobilizing specific conceptual forms around which the life skills of the entrepreneurial self involves a responsibilization of the individual citizen-consumer....
DEFF Research Database (Denmark)
Maruyama, P. K.; Oliveira, G. M.; Ferreira, Célia Maria Dias
2013-01-01
Generalization prevails in flower-animal interactions, and although animal visitors are not equally effective pollinators, most interactions likely represent an important energy intake for the animal visitor. Hummingbirds are nectar-feeding specialists, and many tropical plants are specialized...... to increase the overall nectar availability. We showed that mean nectar offer, at the transect scale, was the only parameter related to hummingbird visitation frequency, more so than nectar offer at single flowers and at the plant scale, or pollination syndrome. Centrality indices, calculated using...... energy provided by non-ornithophilous plants may facilitate reproduction of truly ornithophilous flowers by attracting and maintaining hummingbirds in the area. This may promote asymmetric hummingbird-plant associations, i.e., pollination depends on floral traits adapted to hummingbird morphology...
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying
2015-01-01
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
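The property underlying H-matrix compression can be illustrated in a few lines: a block of a Matérn covariance coupling two well-separated point clusters is numerically low-rank. The sketch below uses a truncated SVD on one such block; a real H-matrix code would use hierarchical partitioning and adaptive cross approximation instead, and the cluster geometry here is an assumption of this toy example.

```python
# Sketch: an off-diagonal Matérn-3/2 covariance block between two
# well-separated 1-D clusters, compressed to rank k << n by truncated SVD.

import numpy as np

def matern32(r, ell=1.0):
    a = np.sqrt(3.0) * r / ell
    return (1.0 + a) * np.exp(-a)

x = np.linspace(0.0, 1.0, 200)                 # cluster 1
y = np.linspace(3.0, 4.0, 200)                 # cluster 2, well separated
K = matern32(np.abs(x[:, None] - y[None, :]))  # coupling block

k = 5                                          # retained rank, k << n
U, s, Vt = np.linalg.svd(K, full_matrices=False)
K_k = (U[:, :k] * s[:k]) @ Vt[:k]
rel_err = np.linalg.norm(K - K_k) / np.linalg.norm(K)
# storage: 2*n*k + k numbers instead of n*n for this block
```

Storing all admissible blocks in such factored form is what yields the O(kn log n) cost and storage quoted in the abstract.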
Large hierarchies from approximate R symmetries
International Nuclear Information System (INIS)
Kappl, Rolf; Ratz, Michael; Vaudrevange, Patrick K.S.
2008-12-01
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)
Approximate Networking for Universal Internet Access
Directory of Open Access Journals (Sweden)
Junaid Qadir
2017-12-01
Full Text Available Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on a utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of “approximate computing”, which relies on relaxing the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards various knobs available at the network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA) in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.
Uncertainty relations for approximation and estimation
Energy Technology Data Exchange (ETDEWEB)
Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)
2016-05-27
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
Uncertainty relations for approximation and estimation
International Nuclear Information System (INIS)
Lee, Jaeha; Tsutsui, Izumi
2016-01-01
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
Intrinsic Diophantine approximation on general polynomial surfaces
DEFF Research Database (Denmark)
Tiljeset, Morten Hein
2017-01-01
We study the Hausdorff measure and dimension of the set of intrinsically simultaneously -approximable points on a curve, surface, etc, given as a graph of integer polynomials. We obtain complete answers to these questions for algebraically “nice” manifolds. This generalizes earlier work done...
Perturbation of operators and approximation of spectrum
Indian Academy of Sciences (India)
outside the bounds of essential spectrum of A(x) can be approximated ... some perturbed discrete Schrödinger operators treating them as block ...... particular, one may think of estimating the spectrum and spectral gaps of Schrödinger.
Quasilinear theory without the random phase approximation
International Nuclear Information System (INIS)
Weibel, E.S.; Vaclavik, J.
1980-08-01
The system of quasilinear equations is derived without making use of the random phase approximation. The fluctuating quantities are described by the autocorrelation function of the electric field using the techniques of Fourier analysis. The resulting equations possess the necessary conservation properties, but comprise new terms which hitherto have been lost in the conventional derivations.
Rational approximations and quantum algorithms with postselection
Mahadev, U.; de Wolf, R.
2015-01-01
We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We
Padé approximations and diophantine geometry.
Chudnovsky, D V; Chudnovsky, G V
1985-04-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves.
Approximate systems with confluent bonding mappings
Lončar, Ivan
2001-01-01
If X = {Xn, pnm, N} is a usual inverse system with confluent (monotone) bonding mappings, then the projections are confluent (monotone). This is not true for approximate inverse systems. The main purpose of this paper is to show that the property of Kelley (smoothness) of the spaces Xn is a sufficient condition for the confluence (monotonicity) of the projections.
Function approximation with polynomial regression splines
International Nuclear Information System (INIS)
Urbanski, P.
1996-01-01
Principles of polynomial regression splines, as well as algorithms and programs for their computation, are presented. The programs, prepared using the software package MATLAB, are generally intended for approximation of X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)
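The MATLAB programs themselves are not reproduced in the abstract. As an illustrative sketch (in Python rather than MATLAB, with an assumed truncated-power basis and synthetic data standing in for a measured spectrum), a polynomial regression spline reduces to a linear least-squares problem:

```python
# Sketch: least-squares fit of a cubic regression spline using a
# truncated-power basis: 1, x, x^2, x^3, (x - t_j)_+^3 for interior knots t_j.

import numpy as np

def spline_design(x, knots, degree=3):
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.maximum(x - t, 0.0) ** degree for t in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)  # noisy "spectrum"

B = spline_design(x, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
rmse = np.sqrt(np.mean((fit - y) ** 2))
```

Knot placement and degree are the tuning choices; once they are fixed, the fit is a single linear solve, which is what makes regression splines attractive for calibration work.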
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
On the parametric approximation in quantum optics
Energy Technology Data Exchange (ETDEWEB)
D' Ariano, G.M.; Paris, M.G.A.; Sacchi, M.F. [Istituto Nazionale di Fisica Nucleare, Pavia (Italy); Pavia Univ. (Italy). Dipt. di Fisica ' Alessandro Volta'
1999-03-01
The authors perform the exact numerical diagonalization of Hamiltonians that describe both degenerate and nondegenerate parametric amplifiers, by exploiting the conservation laws pertaining to each device. The conditions under which the parametric approximation holds are clarified, showing that the most relevant requirement is the coherence of the pump after the interaction, rather than its undepletion.
On the parametric approximation in quantum optics
International Nuclear Information System (INIS)
D'Ariano, G.M.; Paris, M.G.A.; Sacchi, M.F.; Pavia Univ.
1999-01-01
The authors perform the exact numerical diagonalization of Hamiltonians that describe both degenerate and nondegenerate parametric amplifiers, by exploiting the conservation laws pertaining to each device. The conditions under which the parametric approximation holds are clarified, showing that the most relevant requirement is the coherence of the pump after the interaction, rather than its undepletion.
Uniform semiclassical approximation for absorptive scattering systems
International Nuclear Information System (INIS)
Hussein, M.S.; Pato, M.P.
1987-07-01
The uniform semiclassical approximation of the elastic scattering amplitude is generalized to absorptive systems. An integral equation is derived which connects the absorption-modified amplitude to the absorption-free one. Division of the amplitude into diffractive and refractive components is then made possible. (Author) [pt
Tension and Approximation in Poetic Translation
Al-Shabab, Omar A. S.; Baka, Farida H.
2015-01-01
Simple observation reveals that each language and each culture enjoys specific linguistic features and rhetorical traditions. In poetry translation difference and the resultant linguistic tension create a gap between Source Language and Target language, a gap that needs to be bridged by creating an approximation processed through the translator's…
Variational Gaussian approximation for Poisson data
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
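The optimal-Gaussian construction above can be illustrated in one dimension, where the evidence lower bound is available in closed form. This is a hedged sketch, not the authors' algorithm: the toy model y ~ Poisson(exp(x)) with prior x ~ N(0, 1), the observed count y = 5, and the optimizer choice are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy model: y ~ Poisson(exp(x)) with prior x ~ N(0, 1).
# Variational Gaussian q = N(m, s^2); the ELBO up to an additive constant is
#   ELBO(m, s) = y*m - exp(m + s^2/2) - (m^2 + s^2)/2 + log(s)
y = 5.0

def neg_elbo(theta):
    m, log_s = theta
    s2 = np.exp(2.0 * log_s)          # parametrize s via log s to keep s > 0
    return -(y * m - np.exp(m + 0.5 * s2) - 0.5 * (m**2 + s2) + log_s)

res = minimize(neg_elbo, x0=[0.0, 0.0])  # maximize the ELBO over (m, log s)
m_opt, s_opt = res.x[0], float(np.exp(res.x[1]))
```

The optimal variance comes out smaller than the prior variance, reflecting the information in the observed count; in higher dimensions this shrinkage is what the covariance penalty in the Tikhonov-like functional controls.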
Quasiclassical approximation for ultralocal scalar fields
International Nuclear Information System (INIS)
Francisco, G.
1984-01-01
It is shown how to obtain the quasiclassical evolution of a class of field theories called ultralocal fields. Coherent states that follow the 'classical' orbit, as defined by Klauder's weak correspondence principle and restricted action principle, are explicitly shown to approximate the quantum evolution as (h/2π) → 0. (Author) [pt
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
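The low-rank structure that H-matrices exploit can be seen directly on a single off-diagonal block: for an exponential (Matérn ν = 1/2) kernel evaluated between two well-separated point clusters, the singular values decay rapidly, so a small rank k captures the block accurately. A minimal numerical sketch (the geometry and kernel parameters are assumed; a real H-matrix code builds such factorizations hierarchically without ever forming the dense block):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated clusters of 2D points (assumed geometry)
A = rng.uniform(0.0, 1.0, size=(200, 2))
B = rng.uniform(0.0, 1.0, size=(200, 2)) + np.array([4.0, 0.0])

# Off-diagonal block of an exponential (Matern nu = 1/2) covariance
D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
C = np.exp(-D)

# Truncated SVD: rank k << n already captures the block to high accuracy
U, s, Vt = np.linalg.svd(C)
k = 10
Ck = (U[:, :k] * s[:k]) @ Vt[:k]
rel_err = np.linalg.norm(C - Ck) / np.linalg.norm(C)
```

Diagonal blocks (clusters interacting with themselves) are not low-rank and are stored densely; the log-linear cost quoted above comes from applying the low-rank compression only to admissible (well-separated) blocks of the hierarchy.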
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul
2017-01-01
is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
Multidimensional stochastic approximation using locally contractive functions
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
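In its simplest scalar form, the Robbins-Monro recursion x_{n+1} = x_n − a_n g(x_n) with gains a_n = 1/n finds the root of a regression function observed only through noisy evaluations. A one-dimensional sketch (the paper's setting is multidimensional with a locally contractive regression function; the target g(x) = x − 2 and the noise level here are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def robbins_monro(g_noisy, x0, steps=5000):
    # x_{n+1} = x_n - a_n * g_noisy(x_n) with the classic gains a_n = 1/n,
    # which satisfy sum a_n = inf and sum a_n^2 < inf
    x = x0
    for n in range(1, steps + 1):
        x -= (1.0 / n) * g_noisy(x)
    return x

# Root of g(x) = x - 2, observed only through noisy evaluations
x_star = robbins_monro(lambda x: (x - 2.0) + rng.normal(scale=0.5), x0=0.0)
```

The decaying gains average out the noise while still moving the iterate arbitrarily far if needed, which is why the scheme converges both in mean square and with probability one under the conditions the paper studies.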
Pade approximant calculations for neutron escape probability
International Nuclear Information System (INIS)
El Wakil, S.A.; Saad, E.A.; Hendi, A.A.
1984-07-01
The neutron escape probability from a non-multiplying slab containing internal source is defined in terms of a functional relation for the scattering function for the diffuse reflection problem. The Pade approximant technique is used to get numerical results which compare with exact results. (author)
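The Padé technique itself, independent of the neutron-transport application, replaces a truncated power series by a rational function that typically converges much faster than the series it is built from. A small generic sketch using SciPy (the function exp(x) and the orders are assumed; the paper applies the same idea to the scattering function of the diffuse reflection problem):

```python
from math import exp, factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 4 (assumed generic example)
an = [1.0 / factorial(j) for j in range(5)]

# [2/2] Pade approximant: numerator p and denominator q as polynomials
p, q = pade(an, 2)

x = 0.5
approx = float(p(x) / q(x))
```

At x = 0.5 the [2/2] approximant agrees with exp(0.5) to about four decimal places, already better than the underlying fourth-order Taylor polynomial.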
Optical bistability without the rotating wave approximation
Energy Technology Data Exchange (ETDEWEB)
Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)
2010-04-26
Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA) using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to the first harmonic. The first harmonic output field component exhibits reversed or closed-loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.
Optical bistability without the rotating wave approximation
International Nuclear Information System (INIS)
Sharaby, Yasser A.; Joshi, Amitabh; Hassan, Shoukry S.
2010-01-01
Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA) using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to the first harmonic. The first harmonic output field component exhibits reversed or closed-loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
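The flavor of such a closed-form approximation can be reproduced with Fenton-Wilkinson moment matching: the top-event probability of an OR gate over rare basic events is close to the sum of the basic-event probabilities, and a sum of independent lognormals is approximated by a single lognormal with matched mean and variance. This is a generic sketch under assumed parameters, not the article's exact expression:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = -7.0, 0.5, 200_000   # assumed basic-event uncertainty parameters

# Monte Carlo reference: top event = OR of two independent rare basic events
p1 = rng.lognormal(mu, sigma, n)
p2 = rng.lognormal(mu, sigma, n)
top_mc = p1 + p2 - p1 * p2          # exact OR; ~ p1 + p2 for rare events

# Fenton-Wilkinson: match the mean M and variance V of p1 + p2
M = 2.0 * np.exp(mu + 0.5 * sigma**2)
V = 2.0 * (np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2)
s2 = np.log(1.0 + V / M**2)         # sigma^2 of the matched lognormal
median_fw = np.exp(np.log(M) - 0.5 * s2)

median_mc = np.median(top_mc)
```

The matched lognormal reproduces the Monte Carlo median closely at a tiny fraction of the cost, which is the practical appeal the article quantifies for full fault trees.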
RATIONAL APPROXIMATIONS TO GENERALIZED HYPERGEOMETRIC FUNCTIONS.
Under weak restrictions on the various free parameters, general theorems for rational representations of the generalized hypergeometric functions...and certain Meijer G-functions are developed. Upon specialization, these theorems yield a sequence of rational approximations which converge to the
A rational approximation of the effectiveness factor
DEFF Research Database (Denmark)
Wedel, Stig; Luss, Dan
1980-01-01
A fast, approximate method of calculating the effectiveness factor for arbitrary rate expressions is presented. The method does not require any iterative or interpolative calculations. It utilizes the well known asymptotic behavior for small and large Thiele moduli to derive a rational function...
Decision-theoretic troubleshooting: Hardness of approximation
Czech Academy of Sciences Publication Activity Database
Lín, Václav
2014-01-01
Roč. 55, č. 4 (2014), s. 977-988 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Decision-theoretic troubleshooting * Hardness of approximation * NP-completeness Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.451, year: 2014
Approximate solution methods in engineering mechanics
International Nuclear Information System (INIS)
Boresi, A.P.; Cong, K.P.
1991-01-01
This is a short book of 147 pages including references and sometimes bibliographies at the end of each chapter, and subject and author indices at the end of the book. The text includes an introduction of 3 pages, 29 pages explaining approximate analysis, 41 pages on finite differences, 36 pages on finite elements, and 17 pages on specialized methods
Approximated solutions to Born-Infeld dynamics
Energy Technology Data Exchange (ETDEWEB)
Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
The Hartree-Fock seniority approximation
International Nuclear Information System (INIS)
Gomez, J.M.G.; Prieto, C.
1986-01-01
A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)
Analytical Ballistic Trajectories with Approximately Linear Drag
Directory of Open Access Journals (Sweden)
Giliam J. P. de Carpentier
2014-01-01
This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
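The closed form for linear drag is short enough to state: with dv/dt = g − k(v − w) for wind w, the velocity relaxes exponentially toward the terminal velocity v∞ = w + g/k, and position integrates in closed form. A sketch cross-checked against brute-force integration (the drag coefficient, wind, and initial conditions are assumed values; the paper's planners build on expressions of this kind):

```python
import numpy as np

G = np.array([0.0, -9.81])          # gravity
W = np.array([2.0, 0.0])            # invariant wind (assumed value)
K = 0.7                             # linear drag coefficient (assumed value)

def position(p0, v0, t, k=K, g=G, w=W):
    """Closed-form position under linear drag: dv/dt = g - k*(v - w)."""
    v_inf = w + g / k                               # terminal velocity
    return p0 + v_inf * t + (v0 - v_inf) * (1.0 - np.exp(-k * t)) / k

# Cross-check against small-step Euler integration of the same ODE
p, v, dt = np.zeros(2), np.array([10.0, 10.0]), 1e-4
for _ in range(int(2.0 / dt)):
    p, v = p + dt * v, v + dt * (G - K * (v - W))

analytic = position(np.zeros(2), np.array([10.0, 10.0]), 2.0)
```

Because the position is explicit in t, planning queries ("where is the projectile at time t?") cost one expression evaluation rather than an integration loop, which is the performance property the paper exploits.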
Simple Lie groups without the approximation property
DEFF Research Database (Denmark)
Haagerup, Uffe; de Laat, Tim
2013-01-01
For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology...... on the space M0A(G). Recently, Lafforgue and de la Salle proved that SL(3,R) does not have the AP, implying the first example of an exact discrete group without it, namely, SL(3,Z). In this paper we prove that Sp(2,R) does not have the AP. It follows that all connected simple Lie groups with finite center...
The optimal XFEM approximation for fracture analysis
International Nuclear Information System (INIS)
Jiang Shouyan; Du Chengbin; Ying Zongquan
2010-01-01
The extended finite element method (XFEM) provides an effective tool for analyzing fracture mechanics problems. An XFEM approximation consists of standard finite elements, used in the major part of the domain, and enriched elements in the enriched sub-domain for capturing special solution properties such as discontinuities and singularities. However, two issues in the standard XFEM deserve special attention: efficient numerical integration methods and an appropriate construction of the blending elements. In this paper, an optimal XFEM approximation is proposed to overcome the disadvantages mentioned above in the standard XFEM. Modified enrichment functions are presented that can be reproduced exactly everywhere in the domain. The corresponding FORTRAN program is developed for fracture analysis. A classic problem of fracture mechanics is used to benchmark the program. The results indicate that the optimal XFEM can alleviate the errors and improve numerical precision.
Approximated solutions to Born-Infeld dynamics
International Nuclear Information System (INIS)
Ferraro, Rafael; Nigro, Mauro
2016-01-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Traveltime approximations for inhomogeneous HTI media
Alkhalifah, Tariq Ali
2011-01-01
Traveltime information is convenient for parameter estimation, especially if the medium is described by an anisotropic set of parameters. This is especially true if we can relate traveltimes analytically to these medium parameters, which is generally hard to do in inhomogeneous media. As a result, I develop traveltime approximations for horizontally transversely isotropic (HTI) media as simplified, and even linear, functions of the anisotropic parameters. This is accomplished by perturbing the solution of the HTI eikonal equation with respect to η and the azimuthal symmetry direction (usually used to describe the fracture direction) from a generally inhomogeneous elliptically anisotropic background medium. The resulting approximations can provide an accurate analytical description of the traveltime in a homogeneous background compared to other published moveout equations. These equations allow us to readily extend the inhomogeneous elliptically anisotropic background model to an HTI model with variable, but smoothly varying, η and horizontal symmetry direction values. © 2011 Society of Exploration Geophysicists.
Approximate radiative solutions of the Einstein equations
International Nuclear Information System (INIS)
Kuusk, P.; Unt, V.
1976-01-01
In this paper the external field of a bounded source emitting gravitational radiation is considered. A successive approximation method is used to integrate the Einstein equations in Bondi's coordinates (Bondi et al, Proc. R. Soc.; A269:21 (1962)). A method of separation of angular variables is worked out and the approximate Einstein equations are reduced to key equations. The losses of mass, momentum, and angular momentum due to gravitational multipole radiation are found. It is demonstrated that in the case of proper treatment a real mass occurs instead of a mass aspect in a solution of the Einstein equations. In an appendix Bondi's new function is given in terms of sources. (author)
Nonlinear analysis approximation theory, optimization and applications
2014-01-01
Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.
Analysing organic transistors based on interface approximation
International Nuclear Information System (INIS)
Akiyama, Yuto; Mori, Takehiko
2014-01-01
Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming
2013-01-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
Fast Approximate Joint Diagonalization Incorporating Weight Matrices
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Yeredor, A.
2009-01-01
Roč. 57, č. 3 (2009), s. 878-891 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : autoregressive processes * blind source separation * nonstationary random processes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.212, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/tichavsky-fast approximate joint diagonalization incorporating weight matrices.pdf
Mean-field approximation minimizes relative entropy
International Nuclear Information System (INIS)
Bilbro, G.L.; Snyder, W.E.; Mann, R.C.
1991-01-01
The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach
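The relative-entropy route to mean field can be checked on the smallest possible example: a two-spin Ising model, where the factorized distribution minimizing KL(q‖p) satisfies the self-consistency equations m_i = tanh(β(J m_j + h_i)), and the exact marginals are available by enumeration. The couplings and fields below are assumed for illustration; the article's application is binary image restoration:

```python
import numpy as np
from itertools import product

# Assumed toy system: two Ising spins with coupling J and local fields h
beta, J, h = 0.5, 1.0, np.array([0.3, -0.1])

# Exact magnetizations by brute-force enumeration of the four spin states
Z, m_exact = 0.0, np.zeros(2)
for state in product([-1.0, 1.0], repeat=2):
    s = np.array(state)
    w = np.exp(beta * (J * s[0] * s[1] + h @ s))
    Z += w
    m_exact += w * s
m_exact /= Z

# Mean-field self-consistency m_i = tanh(beta * (J * m_j + h_i)):
# the stationarity condition of KL(q || p) over factorized q
m = np.zeros(2)
for _ in range(200):
    m = np.tanh(beta * (J * m[::-1] + h))
```

At this weak coupling (βJ < 1) the fixed-point iteration is a contraction, and the mean-field magnetizations land close to the exact ones; at strong coupling the factorized family is too restrictive and the gap widens.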
On approximation of functions by product operators
Directory of Open Access Journals (Sweden)
Hare Krishna Nigam
2013-12-01
In the present paper, two quite new results on the degree of approximation of a function f belonging to the class Lip(α, r), 1 ≤ r < ∞, and to the weighted class W(L_r, ξ(t)), 1 ≤ r < ∞, by (C,2)(E,1) product operators have been obtained. The results obtained in the present paper generalize various known results on single operators.
Markdown Optimization via Approximate Dynamic Programming
Directory of Open Access Journals (Sweden)
Coşgun
2013-02-01
We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having a significant cross-price elasticity with each other should be jointly determined. Since the state space of the problem is very large, we use Approximate Dynamic Programming. Finally, we provide insights on how each product's price affects the markdown policy.
Solving Math Problems Approximately: A Developmental Perspective.
Directory of Open Access Journals (Sweden)
Dana Ganor-Stern
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders' and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far from (vs. close to) it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children, and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow children to use their estimation skills in an effective manner.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Factorized Approximate Inverses With Adaptive Dropping
Czech Academy of Sciences Publication Activity Database
Kopal, Jiří; Rozložník, Miroslav; Tůma, Miroslav
2016-01-01
Roč. 38, č. 3 (2016), A1807-A1820 ISSN 1064-8275 R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : approximate inverses * incomplete factorization * Gram–Schmidt orthogonalization * preconditioned iterative methods Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016
Semiclassical approximation in Batalin-Vilkovisky formalism
International Nuclear Information System (INIS)
Schwarz, A.
1993-01-01
The geometry of supermanifolds provided with a Q-structure (i.e. with an odd vector field Q satisfying {Q, Q}=0), a P-structure (odd symplectic structure) and an S-structure (volume element) or with various combinations of these structures is studied. The results are applied to the analysis of the Batalin-Vilkovisky approach to the quantization of gauge theories. In particular the semiclassical approximation in this approach is expressed in terms of Reidemeister torsion. (orig.)
Approximation for limit cycles and their isochrons.
Demongeot, Jacques; Françoise, Jean-Pierre
2006-12-01
Local analysis of trajectories of dynamical systems near an attractive periodic orbit displays the notion of asymptotic phase and isochrons. These notions are quite useful in applications to biosciences. In this note, we give an expression for the first approximation of equations of isochrons in the setting of perturbations of polynomial Hamiltonian systems. This method can be generalized to perturbations of systems that have a polynomial integral factor (like the Lotka-Volterra equation).
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Approximate Inverse Preconditioners with Adaptive Dropping
Czech Academy of Sciences Publication Activity Database
Kopal, J.; Rozložník, Miroslav; Tůma, Miroslav
2015-01-01
Roč. 84, June (2015), s. 13-20 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GAP108/11/0853; GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : approximate inverse * Gram-Schmidt orthogonalization * incomplete decomposition * preconditioned conjugate gradient method * algebraic preconditioning * pivoting Subject RIV: BA - General Mathematics Impact factor: 1.673, year: 2015
Approximations and Implementations of Nonlinear Filtering Schemes.
1988-02-01
An analytical approximation for resonance integral
International Nuclear Information System (INIS)
Magalhaes, C.G. de; Martinez, A.S.
1985-01-01
A method is developed which allows an analytical solution for the resonance integral to be obtained. The problem formulation is entirely theoretical and based on physical concepts of a general character. The analytical expression for the integral does not involve any empirical correlation or parameter. Results of the approximation are compared with standard values for each individual resonance and for the sum of all resonances. (M.C.K.) [pt
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika
2013-02-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
Conference on Abstract Spaces and Approximation
Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation
1969-01-01
The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs little...
Development of the relativistic impulse approximation
International Nuclear Information System (INIS)
Wallace, S.J.
1985-01-01
This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
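One of the two approximations named above, random Fourier features, is easy to sketch: for an RBF kernel, random cosine features give an inner product that approximates the kernel value, so a linear model on the features approximates the nonlinear kernel model. This is a generic illustration of Rahimi-Recht features, not the paper's code; the sizes and `gamma` are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma=1.0):
    # exact kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rff_features(X, n_features=4000, gamma=1.0):
    # random Fourier features: k(x, y) ~= z(x) @ z(y)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(50, 3))
K_exact = rbf_kernel(X, X)
Z = rff_features(X)
K_approx = Z @ Z.T          # a linear RankSVM on Z would approximate the kernel one
```

The approximation error shrinks like one over the square root of the number of features, which is the trade-off the paper exploits to avoid the full kernel matrix.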
A Gaussian Approximation Potential for Silicon
Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor
We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
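The core regression step in GAP, Gaussian process prediction of atomic energies from environment descriptors, can be sketched in a few lines. This is a hedged toy: a 1D scalar stands in for the SOAP descriptor, and the kernel, data, and hyperparameters (`gamma`, `noise`) are invented for illustration, not taken from the paper's fit.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, gamma=0.5, noise=1e-6):
    # Gaussian process regression with an RBF kernel: the GAP idea in
    # miniature, mapping a (toy, 1-D) environment descriptor to an energy.
    def k(A, B):
        return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)      # kernel regression weights
    return k(X_test, X_train) @ alpha
```

In the real potential the descriptor is the many-dimensional SOAP vector, the fit is sparsified, and energies, forces, and stresses from DFT enter the fit jointly.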
Approximate modal analysis using Fourier decomposition
International Nuclear Information System (INIS)
Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana
2010-01-01
The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in the eigenvalue calculation. After the calculation, the eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated in a way that some parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that performs the transform on the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.
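A hedged sketch of the reduction idea: transform the matrix to the frequency domain, keep only a few low frequencies, solve the reduced eigenproblem, and expand the eigenvector back. The circulant test matrix (a periodic 1-D Laplacian, which the DFT nearly diagonalizes) and the frequency-selection rule are invented for illustration and are far simpler than structural dynamic matrices.

```python
import numpy as np

n, k = 64, 8
# periodic 1-D Laplacian (circulant, so the DFT nearly diagonalizes it)
A = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
B = F @ A @ F.conj().T                       # matrix in the frequency domain

# keep the k lowest frequencies (indices nearest zero, wrapped)
idx = np.r_[0:k // 2, n - k // 2:n]
B_red = B[np.ix_(idx, idx)]

# much smaller eigenproblem (symmetrize to absorb round-off)
w, v = np.linalg.eigh((B_red + B_red.conj().T) / 2)

# expand the lowest reduced eigenvector back to the full space
v_full = np.zeros(n, complex)
v_full[idx] = v[:, 0]
x = F.conj().T @ v_full
```

For this circulant example the truncation is exact for the retained frequencies; for general structural matrices the removed frequencies introduce the approximation error the paper studies.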
Green-Ampt approximations: A comprehensive analysis
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
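All nine explicit formulas approximate the same implicit Green-Ampt relation, F = K t + ψΔθ ln(1 + F/(ψΔθ)) for cumulative infiltration F. A reference solution is easy to generate by Newton iteration, which is presumably how the implicit "benchmark" values in such comparisons are produced; the soil parameters below are invented for illustration, not taken from the study's data.

```python
import math

def green_ampt_F(t, K=1.0, psi=11.0, dtheta=0.3, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
        F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)),
    solved by Newton iteration (K in cm/h, psi in cm, t in h)."""
    pd = psi * dtheta
    F = max(K * t, 1e-9)                     # initial guess
    for _ in range(100):
        g = F - pd * math.log(1.0 + F / pd) - K * t
        dg = 1.0 - pd / (pd + F)
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F
```

An explicit approximation such as those compared in the paper would replace this iteration with a closed-form estimate of F; the statistical indicators then measure its deviation from the Newton solution.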
An Origami Approximation to the Cosmic Web
Neyrinck, Mark C.
2016-10-01
The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.
Function approximation of tasks by neural networks
International Nuclear Information System (INIS)
Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.
2008-01-01
For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets; the latter were found to be the more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
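A minimal sketch of a network with Mexican hat wavelet transfer functions: random centres and scales form the hidden layer, and the output weights are fitted by linear least squares. The target function and all hyperparameters are invented for illustration; the paper's training scheme and tasks may differ.

```python
import numpy as np

def mexican_hat(u):
    # second derivative of a Gaussian, the "Mexican hat" wavelet
    return (1 - u**2) * np.exp(-u**2 / 2)

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) * np.exp(-0.1 * x**2)   # toy function to approximate

# hidden layer: 40 wavelets with random centres and scales
centres = rng.uniform(-3, 3, 40)
scales = rng.uniform(0.3, 1.0, 40)
Phi = mexican_hat((x[:, None] - centres[None, :]) / scales[None, :])

# output weights by linear least squares
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
```

Because the wavelets are localized in both position and frequency, a modest number of them can reproduce the multi-frequency behaviour the abstract alludes to.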
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.
2011-05-12
The Dutch continental shelf model (DCSM) is a shallow-sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the movable storm surge barriers that protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. Unlike the central difference method, the gradient approximation uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce these results. In doing so, a comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can therefore be used to estimate a large number of parameters at little computational cost.
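The key property described above, a gradient estimate from only two loss evaluations per iteration regardless of dimension, is the heart of SPSA and can be sketched directly. The gain sequences and the quadratic test problem below are invented for illustration; the DCSM calibration is obviously far larger.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101):
    # Simultaneous Perturbation Stochastic Approximation (Spall):
    # each iteration needs only TWO loss evaluations, whatever the dimension.
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha
        ck = c / (k + 1) ** gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +-1
        diff = loss(theta + ck * delta) - loss(theta - ck * delta)
        ghat = diff / (2 * ck) * (1 / delta)                # gradient estimate
        theta -= ak * ghat
    return theta

# example: a 10-parameter quadratic "calibration" problem
target = np.arange(10, dtype=float)
theta = spsa_minimize(lambda t: np.sum((t - target) ** 2), np.zeros(10))
```

A finite-difference scheme on the same 10-parameter problem would need 20 evaluations per iteration; SPSA's two-evaluation budget is what makes it attractive when each evaluation is a full tidal model run.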
Blind sensor calibration using approximate message passing
International Nuclear Information System (INIS)
Schülke, Christophe; Caltagirone, Francesco; Zdeborová, Lenka
2015-01-01
The ubiquity of approximately sparse data has led a variety of communities to take great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them to real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, sensors are decalibrated and each one introduces a different multiplicative gain to the measurements. Cal-AMP shares the scalability of approximate message passing, allowing us to treat large sized instances of these problems, and experimentally exhibits a phase transition between domains of success and failure. (paper)
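The full Cal-AMP recursion is too long to inline, but the sparse-recovery problem it targets can be illustrated with a much simpler relative from the same family of first-order methods, iterative soft-thresholding (ISTA). This is a hedged stand-in, not the paper's algorithm: it assumes a perfectly calibrated, noiseless linear model, whereas Cal-AMP additionally estimates per-sensor gains or failures. The problem sizes and the regularization weight `lam` are invented.

```python
import numpy as np

def soft(x, t):
    # soft-thresholding, the proximal operator of the L1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    # iterative soft-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1,
    # a simple relative of the AMP family that Cal-AMP extends
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 100, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)            # sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse signal
y = A @ x0                                          # ideal measurements
x_hat = ista(A, y)
```

AMP adds an Onsager correction term that greatly speeds convergence for random matrices, and Cal-AMP further treats each measurement as passed through an unknown sensor gain.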
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.; Heemink, A.W.; Verlaan, M.; Hoteit, Ibrahim
2011-01-01
The Dutch continental shelf model (DCSM) is a shallow-sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the movable storm surge barriers that protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. Unlike the central difference method, the gradient approximation uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce these results. In doing so, a comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can therefore be used to estimate a large number of parameters at little computational cost.
Local approximation of a metapopulation's equilibrium.
Barbour, A D; McVinish, R; Pollett, P K
2018-04-18
We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
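The comparison target in the abstract, the equilibrium of Levins's mean-field model, is simple to state and check numerically: for occupancy fraction p obeying dp/dt = c p(1 − p) − e p, the nontrivial equilibrium is p* = 1 − e/c. A minimal sketch (the rate values are invented):

```python
def levins_equilibrium(c, e):
    # Levins (mean-field) metapopulation: dp/dt = c*p*(1-p) - e*p
    # nontrivial equilibrium p* = 1 - e/c (0 if extinction dominates)
    return max(0.0, 1.0 - e / c)

def simulate(c, e, p0=0.5, dt=0.01, steps=20000):
    # forward-Euler integration of the Levins ODE
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1 - p) - e * p)
    return p
```

The paper's contribution is to bound how far the occupation probability of an individual, randomly placed patch can sit from this p* when the local colonization and extinction rates at that point are used.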
Approximate particle number projection in hot nuclei
International Nuclear Information System (INIS)
Kosov, D.S.; Vdovin, A.I.
1995-01-01
Heated finite systems such as hot atomic nuclei have to be described by the canonical partition function. This is, however, a quite difficult technical problem, and as a rule the grand canonical partition function is used in such studies. As a result, some shortcomings appear in the theoretical description because of thermal fluctuations of the particle number. Moreover, in nuclei with pairing correlations, particle-number fluctuations are introduced by some approximate methods (e.g., by the standard BCS method). The exact particle number projection is very cumbersome, so an approximate number projection method for T ≠ 0, based on the formalism of thermo field dynamics, is proposed. The idea of the Lipkin-Nogami method, to expand any operator as a series in powers of the number operator, is used. The system of equations for the coefficients of this expansion is written down, and the solution of the system in the next approximation after the BCS one is obtained. The method, which is of the 'projection after variation' type, is applied to a degenerate single-j-shell model. 14 refs., 1 tab
Nonresonant approximations to the optical potential
International Nuclear Information System (INIS)
Kowalski, K.L.
1982-01-01
A new class of approximations to the optical potential, which includes those of the multiple-scattering variety, is investigated. These approximations are constructed so that the optical potential maintains the correct unitarity properties along with a proper treatment of nucleon identity. The special case of nucleon-nucleus scattering with complete inclusion of Pauli effects is studied in detail. The treatment is such that the optical potential receives contributions only from subsystems embedded in their own physically correct antisymmetrized subspaces. It is found that a systematic development of even the lowest-order approximations requires the use of the off-shell extension due to Alt, Grassberger, and Sandhas along with a consistent set of dynamical equations for the optical potential. In nucleon-nucleus scattering a lowest-order optical potential is obtained as part of a systematic, exact, inclusive connectivity expansion which is expected to be useful at moderately high energies. This lowest-order potential consists of an energy-shifted (tρ)-type term with three-body kinematics plus a heavy-particle exchange or pickup term. The natural appearance of the exchange term additivity in the optical potential clarifies the role of the elastic distortion in connection with the treatment of these processes. The relationship of the relevant aspects of the present analysis of the optical potential to conventional multiple scattering methods is discussed
DEFF Research Database (Denmark)
Sadegh, Payman; Spall, J. C.
1998-01-01
simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright
Regenwetter, Michel; Ho, Moon-Ho R; Tsetlin, Ilia
2007-10-01
This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two types of plurality heuristics to model approval voting behavior. When using a sincere plurality heuristic, voters simplify their decision process by voting for their single favorite candidate. When using a strategic plurality heuristic, voters strategically focus their attention on the 2 front-runners and vote for their preferred candidate among these 2. Using a hierarchy of Thurstonian random utility models, the authors implemented these different decision rules and tested them statistically on 7 real world approval voting elections. They cross-validated their key findings via a psychological Internet experiment. Although a substantial number of voters used the plurality heuristic in the real elections, they did so sincerely, not strategically. Moreover, even though Thurstonian models do not force such agreement, the results show, in contrast to common wisdom about social choice rules, that the sincere social orders by Condorcet, Borda, plurality, and approval voting are identical in all 7 elections and in the Internet experiment. PsycINFO Database Record (c) 2007 APA, all rights reserved.
Fullard, James H.; Ratcliffe, John M.; Jacobs, David S.
2008-03-01
Noctuid moths listen for the echolocation calls of hunting bats and respond to these predator cues with evasive flight. The African bollworm moth, Helicoverpa armigera, feeds at flowers near intensely singing cicadas, Platypleura capensis, yet does not avoid them. We determined that the moth can hear the cicada by observing that both of its auditory receptors (A1 and A2 cells) respond to the cicada’s song. The firing response of the A1 cell rapidly adapts to the song and develops spike periods in less than a second that are in excess of those reported to elicit avoidance flight to bats in earlier studies. The possibility also exists that for at least part of the day, sensory input in the form of olfaction or vision overrides the moth’s auditory responses. While auditory tolerance appears to allow H. armigera to exploit a food resource in close proximity to acoustic interference, it may render their hearing defence ineffective and make them vulnerable to predation by bats during the evening when cicadas continue to sing. Our study describes the first field observation of an eared insect ignoring audible but innocuous sounds.
Induced Compton scattering effects in radiation transport approximations
International Nuclear Information System (INIS)
Gibson, D.R. Jr.
1982-01-01
In this thesis the method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. The methods used to date have only addressed problems in which either induced Compton scattering is ignored, or problems in which linear scattering is ignored. Also, problems which include both induced Compton scattering and spatial effects have not been considered previously. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions
Induced Compton-scattering effects in radiation-transport approximations
International Nuclear Information System (INIS)
Gibson, D.R. Jr.
1982-02-01
The method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. The methods used to date have only addressed problems in which either induced Compton scattering is ignored, or problems in which linear scattering is ignored. Also, problems which include both induced Compton scattering and spatial effects have not been considered previously. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions
Photoelectron spectroscopy and the dipole approximation
Energy Technology Data Exchange (ETDEWEB)
Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others]
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
Pentaquarks in the Jaffe-Wilczek approximation
International Nuclear Information System (INIS)
Narodetskii, I.M.; Simonov, Yu.A.; Trusov, M.A.; Semay, C.; Silvestre-Brac, B.
2005-01-01
The masses of the uudds-bar, uuddd-bar, and uussd-bar pentaquarks are evaluated in the framework of both the effective Hamiltonian approach to QCD and the spinless Salpeter equation, using the Jaffe-Wilczek diquark approximation and the string interaction for the diquark-diquark-antiquark system. The pentaquark masses are found to be in the region above 2 GeV. This indicates that Goldstone-boson-exchange effects may play an important role in the light pentaquarks. The same calculations yield a mass of ∼3250 MeV for the [ud]_2 c-bar pentaquark and ∼6509 MeV for the [ud]_2 b-bar pentaquark. [ru]
Localization and stationary phase approximation on supermanifolds
Zakharevich, Valentin
2017-08-01
Given an odd vector field Q on a supermanifold M and a Q-invariant density μ on M, under certain compactness conditions on Q, the value of the integral ∫Mμ is determined by the value of μ on any neighborhood of the vanishing locus N of Q. We present a formula for the integral in the case where N is a subsupermanifold which is appropriately non-degenerate with respect to Q. In the process, we discuss the linear algebra necessary to express our result in a coordinate independent way. We also extend the stationary phase approximation and the Morse-Bott lemma to supermanifolds.
SAM revisited: uniform semiclassical approximation with absorption
International Nuclear Information System (INIS)
Hussein, M.S.; Pato, M.P.
1986-01-01
The uniform semiclassical approximation is modified to take into account strong absorption. The resulting theory, very similar to the one developed by Frahn and Gross, is used to discuss heavy-ion elastic scattering at intermediate energies. The theory permits a reasonably unambiguous separation of refractive and diffractive effects. The systems ¹²C + ¹²C and ¹²C + ¹⁶O, which seem to exhibit a remnant of a nuclear rainbow at E = 20 MeV/N, are analysed with the theory, which is built directly on a model for the S-matrix. Simple relations between the fitted S-matrix and the underlying complex potential are derived. (Author) [pt]
TMB: Automatic differentiation and Laplace approximation
DEFF Research Database (Denmark)
Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte
2016-01-01
TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel...... computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects...
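The Laplace approximation that TMB maximizes can be illustrated by hand on a toy random-effects model (TMB itself builds this automatically from the C++ template via automatic differentiation). Everything here, the Poisson model with a single Gaussian random effect, the data, and `tau`, is invented for illustration.

```python
import numpy as np

# toy model: counts y_i ~ Poisson(exp(u)), random effect u ~ N(0, tau^2).
# The marginal likelihood integrates u out; the Laplace approximation
# replaces the integrand by a Gaussian centred at the mode of the joint.
y = np.array([3, 5, 4, 6, 2])
tau = 1.0
n = len(y)

def neg_joint(u):
    # -log p(y, u) up to the y!-dependent constant
    return n * np.exp(u) - y.sum() * u + u**2 / (2 * tau**2)

# inner optimization: find the mode of the joint by Newton iteration
u = 0.0
for _ in range(50):
    grad = n * np.exp(u) - y.sum() + u / tau**2
    hess = n * np.exp(u) + 1 / tau**2
    u -= grad / hess

# Laplace approximation of the log marginal likelihood
log_m_laplace = -neg_joint(u) + 0.5 * np.log(2 * np.pi / hess)

# brute-force quadrature for comparison (feasible only in 1-D)
grid = np.linspace(-5, 5, 20001)
vals = np.exp(-neg_joint(grid))
log_m_exact = np.log(vals.sum() * (grid[1] - grid[0]))
```

TMB's advantage is that the inner optimization and the Hessian come from automatic differentiation of the user's C++ template, so the same recipe scales to thousands of random effects where quadrature is impossible.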
Shape theory categorical methods of approximation
Cordier, J M
2008-01-01
This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and
On one approximation in quantum chromodynamics
International Nuclear Information System (INIS)
Alekseev, A.I.; Bajkov, V.A.; Boos, Eh.Eh.
1982-01-01
The form of the complete fermion propagator near the mass shell is investigated. A model of quantum chromodynamics (MQC) is considered in which the Bloch-Nordsieck approximation has been made in the fermion sector, i.e. the γ matrices are replaced by c-numbers. The model is investigated by means of the Schwinger-Dyson equation for the quark propagator in the infrared region. The Schwinger-Dyson equation is reduced to a differential equation which is easily solved. The Green function is then conveniently represented as an integral transformation
Static correlation beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian Sommer
2014-01-01
derived from Hedin's equations (Random Phase Approximation (RPA), Time-dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly...... and confirms that BSE greatly improves the RPA and TDHF results despite the fact that the BSE excitation spectrum breaks down in the dissociation limit. In contrast, second order screened exchange gives a poor description of the dissociation limit, which can be attributed to the fact that it cannot be derived...
Multi-compartment linear noise approximation
International Nuclear Information System (INIS)
Challenger, Joseph D; McKane, Alan J; Pahle, Jürgen
2012-01-01
The ability to quantify the stochastic fluctuations present in biochemical and other systems is becoming increasingly important. Analytical descriptions of these fluctuations are attractive, as stochastic simulations are computationally expensive. Building on previous work, a linear noise approximation is developed for biochemical models with many compartments, for example cells. The procedure is then implemented in the software package COPASI. This technique is illustrated with two simple examples and is then applied to a more realistic biochemical model. Expressions for the noise, given in the form of covariance matrices, are presented. (paper)
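The simplest single-compartment, one-species instance conveys the idea behind the linear noise approximation: for an immigration-death process (constant birth rate, linear death rate), the stationary variance of the fluctuations solves a scalar Lyapunov equation. The sketch below covers only that scalar case, not the paper's multi-compartment COPASI implementation; the rate names and values are chosen for this example.

```python
def lna_stationary_variance(birth_rate, death_rate):
    """Linear noise approximation for an immigration-death process
    (birth at constant rate b, death of each molecule at rate d):
    the stationary fluctuations solve the scalar Lyapunov equation
    2*A*var + B = 0 with drift Jacobian A = -d and diffusion
    B = b + d*x_ss = 2*b evaluated at the deterministic steady state."""
    x_ss = birth_rate / death_rate          # deterministic steady state
    A = -death_rate                         # Jacobian of the drift
    B = birth_rate + death_rate * x_ss      # = 2 * birth_rate
    return -B / (2.0 * A)                   # stationary variance

var = lna_stationary_variance(50.0, 2.0)
```

For this linear model the approximation is exact and reproduces the known Poisson result, variance equal to the mean copy number.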
Approximation of Moessbauer spectra of metallic glasses
International Nuclear Information System (INIS)
Miglierini, M.; Sitek, J.
1988-01-01
Moessbauer spectra of iron-rich metallic glasses are approximated by means of six broadened lines which have line position relations similar to those of α-Fe. It is shown via the results of the DISPA (dispersion mode vs. absorption mode) line shape analysis that each spectral peak is broadened owing to a sum of Lorentzian lines weighted by a Gaussian distribution in the peak position. Moessbauer parameters of amorphous metallic Fe83B17 and Fe40Ni40B20 alloys are presented, derived from the fitted spectra. (author). 2 figs., 2 tabs., 21 refs
High energy approximations in quantum field theory
International Nuclear Information System (INIS)
Orzalesi, C.A.
1975-01-01
New theoretical methods in hadron physics based on a high-energy perturbation theory are discussed. The approximate solutions to quantum field theory obtained by this method appear to be sufficiently simple and rich in structure to encourage hadron dynamics studies. An operator eikonal form for field-theoretic Green's functions is derived, and it is discussed how the eikonal perturbation theory is to be renormalized. This method is extended to massive quantum electrodynamics of scalar charged bosons. Possible developments and applications of this theory are given [pt
Weak field approximation of new general relativity
International Nuclear Information System (INIS)
Fukui, Masayasu; Masukawa, Junnichi
1985-01-01
In the weak field approximation, gravitational field equations of new general relativity with arbitrary parameters are examined. Assuming a conservation law ∂^μ T_μν = 0 of the energy-momentum tensor T_μν for matter fields in addition to the usual one ∂^ν T_μν = 0, we show that the linearized gravitational field equations are decomposed into equations for a Lorentz scalar field and symmetric and antisymmetric Lorentz tensor fields. (author)
Pentaquarks in the Jaffe-Wilczek Approximation
International Nuclear Information System (INIS)
Narodetskii, I.M.; Simonov, Yu.A.; Trusov, M.A.; Semay, C.; Silvestre-Brac, B.
2005-01-01
The masses of uudds-bar, uuddd-bar, and uussd-bar pentaquarks are evaluated in a framework of both the effective Hamiltonian approach to QCD and the spinless Salpeter equation using the Jaffe-Wilczek diquark approximation and the string interaction for the diquark-diquark-antiquark system. The pentaquark masses are found to be in the region above 2 GeV. That indicates that the Goldstone boson exchange effects may play an important role in the light pentaquarks. The same calculations yield a mass of ∼3250 MeV for the [ud]2c-bar pentaquark and ∼6509 MeV for the [ud]2b-bar pentaquark
Turbo Equalization Using Partial Gaussian Approximation
DEFF Research Database (Denmark)
Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro
2016-01-01
This letter deals with turbo equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation propagation rule to convert messages passed from the demodulator and decoder to the equalizer and computes messages...... returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA...
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information, leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2017-07-01
For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting PI(d) as the probability of infection at a given mean dose d, the widely used dose-response model PI(d) = 1 - (1 + d/β)^-α is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions β >> 1 and α << β, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure of the closeness of the approximate and exact models, together with a simple rule of thumb expressed in terms of the estimated parameter α̂, for assessing the validity and approximation accuracy of the approximate formula. This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability measure, the more closely the approximate formula follows the exact beta-Poisson dose-response curve. © 2016 Society for Risk Analysis.
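The approximate formula quoted in the abstract is trivial to compute, which is precisely why it is so widely preferred over the hypergeometric exact model; a minimal sketch (the parameter values below are illustrative, not taken from the paper):

```python
import math

def beta_poisson_approx(dose, alpha, beta):
    """Approximate beta-Poisson probability of infection:
    PI(d) = 1 - (1 + d/beta) ** (-alpha).
    Commonly regarded as valid when beta >> 1 and alpha << beta."""
    if dose < 0 or alpha <= 0 or beta <= 0:
        raise ValueError("dose must be >= 0 and alpha, beta > 0")
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters; a dose-response curve over several mean doses.
alpha, beta = 0.25, 40.0
for d in (1.0, 10.0, 100.0, 1000.0):
    print(d, beta_poisson_approx(d, alpha, beta))
```

For small doses the formula reduces to the familiar linear low-dose behavior PI(d) ≈ α·d/β.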
Deep-inelastic structure functions in an approximation to the bag theory
International Nuclear Information System (INIS)
Jaffe, R.L.
1975-01-01
A cavity approximation to the bag theory developed earlier is extended to the treatment of forward virtual Compton scattering. In the Bjorken limit and for small values of ω (ω = |2p·q/q²|) it is argued that the operator nature of the bag boundaries might be ignored. Structure functions are calculated in one and three dimensions. Bjorken scaling is obtained. The model provides a realization of light-cone current algebra and possesses a parton interpretation. The structure functions show a quasielastic peak. The spreading of the structure functions about the peak is associated with confinement. As expected, Regge behavior is not obtained for large ω. The "momentum sum rule" is saturated, indicating that the hadron's charged constituents carry all the momentum in this model. νW_L is found to scale and is calculable. Application of the model to the calculation of spin-dependent and chiral-symmetry-violating structure functions is proposed. The nature of the intermediate states in this approximation is discussed. Problems associated with the cavity approximation are also discussed
An Oblivious O(1)-Approximation for Single Source Buy-at-Bulk
Goel, Ashish
2009-10-01
We consider the single-source (or single-sink) buy-at-bulk problem with an unknown concave cost function. We want to route a set of demands along a graph to or from a designated root node, and the cost of routing x units of flow along an edge is proportional to some concave, non-decreasing function f such that f(0) = 0. We present a polynomial time algorithm that finds a distribution over trees such that the expected cost of a tree for any f is within an O(1)-factor of the optimum cost for that f. The previous best simultaneous approximation for this problem, even ignoring computation time, was O(log |D|), where D is the multi-set of demand nodes. We design a simple algorithmic framework using the ellipsoid method that finds an O(1)-approximation if one exists, and then construct a separation oracle using a novel adaptation of the Guha, Meyerson, and Munagala [10] algorithm for the single-sink buy-at-bulk problem that proves an O(1) approximation is possible for all f. The number of trees in the support of the distribution constructed by our algorithm is at most 1 + log |D|. © 2009 IEEE.
APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL
Directory of Open Access Journals (Sweden)
Kasa, Richard
2015-01-01
Full Text Available In a remarkably short time, economic globalisation has changed the world’s economic order, bringing new challenges and opportunities to SMEs. These processes have intensified the need to measure innovation capability, which has become a crucial issue for today’s economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers’ needs and preferences, as mentioned in the EU’s innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the innovation capability of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting innovation performance, not only on a macro but also on a micro level. In this article a critical analysis of the literature on innovation potential approximation and prediction is given, showing its weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.
Analytic approximate radiation effects due to Bremsstrahlung
Energy Technology Data Exchange (ETDEWEB)
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to near-by magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R&D Energy Recovery Linac.
TMB: Automatic Differentiation and Laplace Approximation
Directory of Open Access Journals (Sweden)
Kasper Kristensen
2016-04-01
Full Text Available TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (≈ 10^6) and parameters (≈ 10^3). Computation times using ADMB and TMB are compared on a suite of examples ranging from simple models to large spatial models where the random effects are a Gaussian random field. Speedups ranging from 1.5 to about 100 are obtained with increasing gains for large problems. The package and examples are available at http://tmb-project.org/.
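The Laplace approximation that TMB applies to the random effects can be illustrated on a one-dimensional toy model with a known closed-form answer: a Gaussian observation with a Gaussian random effect. Because the joint negative log-likelihood is exactly quadratic, the approximation is exact here, which makes the sketch easy to verify. This is a hand-rolled illustration, not TMB's C++ template interface.

```python
import math

def laplace_marginal(y, sigma, tau):
    """Laplace approximation of the marginal density
    p(y) = ∫ N(y | u, sigma^2) N(u | 0, tau^2) du,
    i.e. exp(-f(u_hat)) * sqrt(2*pi / f''(u_hat)) for the joint
    negative log-likelihood f(u). Exact for this Gaussian toy model."""
    u_hat = y * tau**2 / (sigma**2 + tau**2)     # inner optimum of f
    f_hat = (u_hat**2 / (2 * tau**2)
             + (y - u_hat)**2 / (2 * sigma**2)
             + math.log(2 * math.pi * sigma * tau))
    hessian = 1 / tau**2 + 1 / sigma**2          # f''(u), constant here
    return math.exp(-f_hat) * math.sqrt(2 * math.pi / hessian)

def exact_marginal(y, sigma, tau):
    """Closed form: y is marginally N(0, sigma^2 + tau^2)."""
    var = sigma**2 + tau**2
    return math.exp(-y**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

y, sigma, tau = 1.0, 1.0, 1.0
print(laplace_marginal(y, sigma, tau), exact_marginal(y, sigma, tau))  # both ≈ 0.2197
```

In TMB the same construction is applied in many dimensions, with the inner optimum and Hessian obtained by automatic differentiation of the user's joint likelihood.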
On some applications of diophantine approximations.
Chudnovsky, G V
1984-03-01
Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximation typical of "almost all" numbers. In particular, any such number has the "2 + ε" exponent of irrationality: |Θ - p/q| > q^(-2-ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162].
Detecting Change-Point via Saddlepoint Approximations
Institute of Scientific and Technical Information of China (English)
Zhaoyuan LI; Maozai TIAN
2017-01-01
It is well known that the change-point problem is an important part of statistical model analysis. Most of the existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A test at a single quantile is proposed based on the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of every location of the sequence being a change-point. The location of a change-point can be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work well on both large and small samples, in the case of change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper is of independent interest to readers in this research area.
Traveling cluster approximation for uncorrelated amorphous systems
International Nuclear Information System (INIS)
Kaplan, T.; Sen, A.K.; Gray, L.J.; Mills, R.
1985-01-01
In this paper, the authors apply the TCA concepts to spatially disordered, uncorrelated systems (e.g., fluids or amorphous metals without short-range order). This is the first approximation scheme for amorphous systems that takes cluster effects into account while preserving the Herglotz property for any amount of disorder. They have performed some computer calculations for the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results are compared with exact calculations (which, in principle, take into account all cluster effects) and with the CPA, which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA, and yet, apparently, the pair approximation distorts some of the features of the exact results. They conclude that the effects of large clusters are much more important in an uncorrelated liquid metal than in a substitutional alloy. As a result, the pair TCA, which does quite a nice job for alloys, is not adequate for the liquid. Larger clusters must be treated exactly, and therefore an n-TCA with n > 2 must be used
Approximating Markov Chains: What and why
International Nuclear Information System (INIS)
Pincus, S.
1996-01-01
Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics
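An Ulam-type construction is a common concrete realization of the finite-state Markov chain approximation described above; a minimal sketch for the logistic map on [0, 1] follows. The bin count and per-bin sampling scheme are choices of this sketch, not of the article.

```python
import random

def ulam_matrix(f, n_bins, samples_per_bin=200, seed=0):
    """Ulam-type finite-state approximation of a map f on [0, 1]:
    P[i][j] = fraction of sampled points in bin i that f sends to bin j."""
    rng = random.Random(seed)
    P = [[0.0] * n_bins for _ in range(n_bins)]
    for i in range(n_bins):
        lo = i / n_bins
        for _ in range(samples_per_bin):
            x = lo + rng.random() / n_bins
            j = min(int(f(x) * n_bins), n_bins - 1)
            P[i][j] += 1.0 / samples_per_bin
    return P

def stationary(P, iters=2000):
    """Steady state by repeated application of the transition matrix."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

logistic = lambda x: 4.0 * x * (1.0 - x)
pi = stationary(ulam_matrix(logistic, 20))
```

The stationary vector concentrates mass near the endpoints, consistent with the known invariant density 1/(π√(x(1−x))) of the fully chaotic logistic map.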
Approximation to estimation of critical state
International Nuclear Information System (INIS)
Orso, Jose A.; Rosario, Universidad Nacional
2011-01-01
The position of the control rod for the critical state of a nuclear reactor depends on several factors, including, but not limited to, the temperature and the configuration of the fuel elements inside the core. Therefore, the position cannot be known in advance. In this paper theoretical estimates are developed to obtain an equation that allows calculating the position of the control rod for the critical state (approximation to critical) of the nuclear reactor RA-4; it will be used to create software that performs the estimation from the count rate of the reactor pulse channel and the length of control rod withdrawn (in cm). For the final estimate of the approximation to critical, an experimentally obtained function giving the control rod reactivity as a function of position is used; it is manipulated mathematically to obtain a linear function that gives the length of control rod which has to be removed to bring the reactor to the critical state. (author) [es
Analytic approximate radiation effects due to Bremsstrahlung
International Nuclear Information System (INIS)
Ben-Zvi, I.
2012-01-01
The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to near-by magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R and D Energy Recovery Linac.
Approximate analytic theory of the multijunction grill
International Nuclear Information System (INIS)
Hurtak, O.; Preinhaelter, J.
1991-03-01
An approximate analytic theory of the general multijunction grill is developed. Omitting the evanescent modes in the subsidiary waveguides both at the junction and at the grill mouth and neglecting multiple wave reflection, simple formulae are derived for the reflection coefficient, the amplitudes of the incident and reflected waves and the spectral power density. These quantities are expressed through the basic grill parameters (the electric length of the structure and phase shift between adjacent waveguides) and two sets of reflection coefficients describing wave reflections in the subsidiary waveguides at the junction and at the plasma. Approximate expressions for these coefficients are also given. The results are compared with a numerical solution of two specific examples; they were shown to be useful for the optimization and design of multijunction grills. For the JET structure it is shown that, in the case of a dense plasma, many results can be obtained from the simple formulae for a two-waveguide multijunction grill. (author) 12 figs., 12 refs
Negara, Ardiansyah
2013-01-01
Anisotropy of the hydraulic properties of subsurface geologic formations is an essential feature that has been established as a consequence of the different geologic processes that they undergo over the longer geologic time scale. With respect to petroleum reservoirs, in many cases anisotropy plays a significant role in dictating the direction of flow, which is no longer dependent only on the pressure gradient direction but also on the principal directions of anisotropy. Furthermore, in complex systems involving the flow of multiphase fluids in which gravity and capillarity play an important role, anisotropy can also have important influences. Therefore, there has been a great deal of motivation to consider anisotropy when solving the governing conservation laws numerically. Unfortunately, the two-point flux approximation of the finite difference approach is not capable of handling full tensor permeability fields. Lately, however, it has been possible to adapt the multipoint flux approximation, which can handle anisotropy, to the framework of finite difference schemes. In the multipoint flux approximation method, the stencil of approximation is more involved, i.e., it requires a 9-point stencil for the 2-D model and a 27-point stencil for the 3-D model. This is apparently challenging and cumbersome when assembling the global system of equations. In this work, we apply the equation-type approach, namely the experimenting pressure field approach, which breaks the solution of the global problem into the solution of a multitude of local problems, significantly reducing the complexity without affecting the accuracy of the numerical solution. This approach also reduces the computational cost of the simulation. We have applied this technique to a variety of anisotropy scenarios of 3-D subsurface flow problems, and the numerical results demonstrate that the experimenting pressure field technique fits very well with the multipoint flux approximation.
DEFF Research Database (Denmark)
Sadegh, Payman
1997-01-01
This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions...... of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point, The procedure is illustrated by a numerical example, (C) 1997 Elsevier Science Ltd....
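A minimal sketch of the kind of scheme the abstract describes: a simultaneous-perturbation gradient estimate followed by a projection onto the feasible set after every update. The box constraint, the quadratic test loss, and the gain-sequence exponents (the standard SPSA choices) are all illustrative assumptions, not details taken from the paper.

```python
import random

def spsa_projected(loss, project, x0, a=0.1, c=0.1, iters=2000, seed=1):
    """SPSA with a projection step enforcing the constraints after
    every parameter update. Only loss *values* are used; no direct
    gradient of the loss function is required."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                  # gain sequences (standard
        ck = c / k ** 0.101                  # SPSA decay exponents)
        delta = [rng.choice((-1.0, 1.0)) for _ in x]
        xp = [xi + ck * di for xi, di in zip(x, delta)]
        xm = [xi - ck * di for xi, di in zip(x, delta)]
        g = (loss(xp) - loss(xm)) / (2 * ck)  # two-sided SP estimate
        x = project([xi - ak * g / di for xi, di in zip(x, delta)])
    return x

# Illustrative problem: minimize a quadratic subject to x_i <= 1.
loss = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
project = lambda x: [min(xi, 1.0) for xi in x]
x_star = spsa_projected(loss, project, [0.0, 0.0])
```

The unconstrained minimum (2, 2) is infeasible, so the iterate settles at the Kuhn-Tucker point (1, 1) on the constraint boundary, as the convergence result in the abstract would predict.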
Random phase approximation in relativistic approach
International Nuclear Information System (INIS)
Ma Zhongyu; Yang Ding; Tian Yuan; Cao Ligang
2009-01-01
Some special issues of the random phase approximation (RPA) in the relativistic approach are reviewed. Full consistency and a proper treatment of the coupling to the continuum are responsible for the successful application of the RPA in the description of dynamical properties of finite nuclei. The fully consistent relativistic RPA (RRPA) requires that the relativistic mean field (RMF) wave function of the nucleus and the RRPA correlations are calculated from the same effective Lagrangian, with a consistent treatment of the Dirac sea of negative energy states. The proper treatment of the single particle continuum with scattering asymptotic conditions in the RMF and RRPA is discussed. The full continuum spectrum can be described by the single particle Green's function, and the relativistic continuum RPA is established. A separable form of the pairing force is introduced in the relativistic quasi-particle RPA. (authors)
Random-phase approximation and broken symmetry
International Nuclear Information System (INIS)
Davis, E.D.; Heiss, W.D.
1986-01-01
The validity of the random-phase approximation (RPA) in broken-symmetry bases is tested in an appropriate many-body system for which exact solutions are available. Initially the regions of stability of the self-consistent quasiparticle bases in this system are established and depicted in a 'phase' diagram. It is found that only stable bases can be used in an RPA calculation. This is particularly true for those RPA modes which are not associated with the onset of instability of the basis; it is seen that these modes do not describe any excited state when the basis is unstable, although from a formal point of view they remain acceptable. The RPA does well in a stable broken-symmetry basis provided one is not too close to a point where a phase transition occurs. This is true for both energies and matrix elements. (author)
Local facet approximation for image stitching
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
Approximated solutions to the Schroedinger equation
International Nuclear Information System (INIS)
Rico, J.F.; Fernandez-Alonso, J.I.
1977-01-01
The authors are currently working on a couple of the well-known deficiencies of the variation method and present here some of the results that have been obtained so far. The variation method does not give information a priori on the trial functions best suited for a particular problem nor does it give information a posteriori on the degree of precision attained. In order to clarify the origin of both difficulties, a geometric interpretation of the variation method is presented. This geometric interpretation is the starting point for the exact formal solution to the fundamental state and for the step-by-step approximations to the exact solution which are also given. Some comments on these results are included. (Auth.)
Vortex sheet approximation of boundary layers
International Nuclear Information System (INIS)
Chorin, A.J.
1978-01-01
A grid-free method for approximating incompressible boundary layers is introduced. The computational elements are segments of vortex sheets. The method is related to the earlier vortex method; simplicity is achieved at the cost of replacing the Navier-Stokes equations by the Prandtl boundary layer equations. A new method for generating vorticity at boundaries is also presented; it can be used with the earlier vortex method. The applications presented include (i) flat-plate problems, and (ii) a flow problem in a model cylinder-piston assembly, where the new method is used near walls and an improved version of the random choice method is used in the interior. One of the attractive features of the new method is the ease with which it can be incorporated into hybrid algorithms
Approximate Stokes Drift Profiles in Deep Water
Breivik, Øyvind; Janssen, Peter A. E. M.; Bidlot, Jean-Raymond
2014-09-01
A deep-water approximation to the Stokes drift velocity profile is explored as an alternative to the monochromatic profile. The alternative profile investigated relies on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons with parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile even for complex sea states. That the profile gives a closer match and a more correct shear has implications for ocean circulation models since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. The alternative profile comes at no added numerical cost compared to the monochromatic profile.
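Only the classical monochromatic baseline profile is sketched here; it is built from the same two inputs, the surface Stokes drift and the Stokes transport, that the abstract says the alternative profile also requires. The numerical values are illustrative.

```python
import math

def monochromatic_stokes_drift(u0, transport):
    """Monochromatic deep-water Stokes drift profile
    u(z) = u0 * exp(2 k z) for z <= 0, with the effective wavenumber k
    chosen so the profile integrates to the Stokes transport:
    transport = u0 / (2 k)."""
    k = u0 / (2.0 * transport)
    return lambda z: u0 * math.exp(2.0 * k * z)

# Illustrative surface drift (m/s) and transport (m^2/s).
u = monochromatic_stokes_drift(0.1, 0.02)
```

The alternative profile of the paper is designed to decay more slowly with depth than this exponential, giving a more realistic shear for the Coriolis-Stokes force and Langmuir turbulence parameterizations.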
Analytical approximations for wide and narrow resonances
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2005-01-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
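For orientation, the standard forward-flux forms of the two approximations named in the abstract are sketched below; these are the textbook expressions for a resonant absorber diluted in a moderator (σ_p is the absorber potential-scattering cross section, σ_0 the background cross section per absorber atom), not the paper's adjoint-spectrum analogues, which are not reproduced here.

```latex
% Narrow-resonance (NR) and wide-resonance (WR / NRIM) flux approximations:
\phi_{\mathrm{NR}}(E) \approx
  \frac{\sigma_p + \sigma_0}{\left[\sigma_t(E) + \sigma_0\right]\,E},
\qquad
\phi_{\mathrm{WR}}(E) \approx
  \frac{\sigma_0}{\left[\sigma_a(E) + \sigma_0\right]\,E}
```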
Analytical approximations for wide and narrow resonances
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2005-07-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
The Bloch Approximation in Periodically Perforated Media
International Nuclear Information System (INIS)
Conca, C.; Gomez, D.; Lobo, M.; Perez, E.
2005-01-01
We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω_ε (Ω_ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.
Approximate analytical modeling of leptospirosis infection
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents, or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases was recorded in Malaysia during recent seasons of heavy rainfall and severe flooding. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, is used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
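The compartmental structure behind such models can be illustrated with a much simpler stand-in: an integer-order rodent-reservoir/human model integrated by forward Euler. This is only a toy sketch; the compartments, rates, and parameter values are illustrative assumptions and not the fractional-order model or the Laplace Adomian scheme of the paper.

```python
def simulate_leptospirosis(beta_h=0.0005, beta_r=0.001, gamma=0.1,
                           days=100.0, dt=0.01):
    """Forward-Euler integration of a toy SI (rodent) / SIR (human) model.

    Humans are infected through contact with the infected rodent reservoir
    (rate beta_h), rodents infect each other (rate beta_r), and infected
    humans recover at rate gamma. All values are illustrative only.
    """
    S_h, I_h, R_h = 999.0, 1.0, 0.0   # human compartments
    S_r, I_r = 90.0, 10.0             # rodent reservoir
    for _ in range(int(days / dt)):
        d_Sh = -beta_h * S_h * I_r
        d_Ih = beta_h * S_h * I_r - gamma * I_h
        d_Rh = gamma * I_h
        d_Sr = -beta_r * S_r * I_r
        d_Ir = beta_r * S_r * I_r
        S_h += d_Sh * dt
        I_h += d_Ih * dt
        R_h += d_Rh * dt
        S_r += d_Sr * dt
        I_r += d_Ir * dt
    return S_h, I_h, R_h, S_r, I_r
```

By construction the human and rodent populations are each conserved, which is a useful sanity check on any such scheme before replacing the integer-order derivative with a fractional one.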
Approximate spacetime symmetries and conservation laws
Energy Technology Data Exchange (ETDEWEB)
Harte, Abraham I [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States)], E-mail: harte@uchicago.edu
2008-10-21
A notion of geometric symmetry is introduced that generalizes the classical concepts of Killing fields and other affine collineations. There is a sense in which flows under these new vector fields minimize deformations of the connection near a specified observer. Any exact affine collineations that may exist are special cases. The remaining vector fields can all be interpreted as analogs of Poincare and other well-known symmetries near timelike worldlines. Approximate conservation laws generated by these objects are discussed for both geodesics and extended matter distributions. One example is a generalized Komar integral that may be taken to define the linear and angular momenta of a spacetime volume as seen by a particular observer. This is evaluated explicitly for a gravitational plane wave spacetime.
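The generalized Komar integral mentioned in the abstract reduces, for an exact Killing field, to the classical Komar expression; for orientation, that classical form over a 2-surface ∂Σ bounding a spatial volume is (standard textbook formula, not the paper's generalization):

```latex
Q[\xi] \;=\; -\frac{1}{8\pi G}
  \oint_{\partial\Sigma} \nabla^{a}\xi^{b}\,\mathrm{d}S_{ab}
```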
Coated sphere scattering by geometric optics approximation.
Mengran, Zhai; Qieni, Lü; Hongxia, Zhang; Yinxin, Zhang
2014-10-01
A new geometric optics model has been developed for the calculation of light scattering by a coated sphere, and the analytic expression for scattering is presented according to whether rays hit the core or not. The rays of the various geometric optics approximation (GOA) terms are parameterized by the number of reflections at the coating/core interface, the number of reflections at the coating/medium interface, and the number of chords in the core, with the degenerate-path and repeated-path terms considered for rays striking the core, which simplifies the calculation. Rays missing the core are treated with the GOA terms of a homogeneous sphere. The scattering intensity of coated particles is calculated and then compared with results from the Debye series and the Aden-Kerker theory. The consistency of the results proves the validity of the method proposed in this work.
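The elementary building block of any such GOA ray trace is the Fresnel power reflectance at each interface. A self-contained sketch for unpolarized light (a standard formula, shown here as an illustrative component rather than the paper's coated-sphere code):

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized Fresnel power reflectance at an n1 -> n2 interface.

    theta_i is the angle of incidence in radians. Returns 1.0 for total
    internal reflection. The transmittance of each ray segment in a
    geometric-optics scattering model is 1 minus this value.
    """
    sin_t = n1 * math.sin(theta_i) / n2
    if abs(sin_t) > 1.0:
        return 1.0  # total internal reflection
    theta_t = math.asin(sin_t)
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)  # perpendicular pol.
    r_p = (n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)  # parallel pol.
    return 0.5 * (r_s * r_s + r_p * r_p)
```

At normal incidence on a water-like interface (n = 1.5 against 1.0) this gives the familiar 4% reflectance.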
Approximation by max-product type operators
Bede, Barnabás; Gal, Sorin G
2016-01-01
This monograph presents a broad treatment of developments in an area of constructive approximation involving the so-called "max-product" type operators. The exposition highlights the max-product operators as those which allow one to obtain, in many cases, more valuable estimates than those obtained by classical approaches. The text considers a wide variety of operators which are studied for a number of interesting problems such as quantitative estimates, convergence, saturation results, localization, to name several. Additionally, the book discusses the perfect analogies between the probabilistic approaches of the classical Bernstein type operators and of the classical convolution operators (non-periodic and periodic cases), and the possibilistic approaches of the max-product variants of these operators. These approaches allow for two natural interpretations of the max-product Bernstein type operators and convolution type operators: firstly, as possibilistic expectations of some fuzzy variables, and secondly,...
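The defining idea of these operators is to replace the sums of a classical positive linear operator by maxima. For the max-product Bernstein operator this gives a short, concrete sketch (the formula below is the standard definition for nonnegative f on [0, 1]; the function name is illustrative):

```python
from math import comb

def max_product_bernstein(f, n, x):
    """Max-product Bernstein operator for nonnegative f on [0, 1]:

        B_n(f)(x) = max_k [ p_{n,k}(x) * f(k/n) ] / max_k p_{n,k}(x),

    where p_{n,k}(x) = C(n,k) x^k (1-x)^(n-k). The classical Bernstein
    polynomial uses sums in place of both maxima.
    """
    p = [comb(n, k) * x**k * (1.0 - x)**(n - k) for k in range(n + 1)]
    num = max(p[k] * f(k / n) for k in range(n + 1))
    return num / max(p)
```

Like its classical counterpart, the operator reproduces constants exactly and interpolates f at the endpoints of [0, 1], which is an easy first check of an implementation.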
Polarized constituent quarks in NLO approximation
International Nuclear Information System (INIS)
Khorramian, Ali N.; Tehrani, S. Atashbar; Mirjalili, A.
2006-01-01
The valon representation provides a basis between hadrons and quarks, in terms of which the bound-state and scattering properties of hadrons can be united and described. We studied polarized valon distributions, which play an important role in describing the spin dependence of parton distributions in the leading and next-to-leading order approximations. The convolution integral in the framework of the valon model was used as a useful tool in the polarized case. To obtain the polarized parton distributions in a proton we need the polarized valon distributions in the proton and the polarized parton distributions inside each valon. We employed Bernstein polynomial averages to determine the unknown parameters of the polarized valon distributions by fitting to the available experimental data.
Approximate Sensory Data Collection: A Survey.
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-03-10
With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoT and WSNs, the size of sensory data has already exceeded several petabytes annually, which poses many troubles and challenges for data collection, a primary operation in IoT and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
Approximate Sensory Data Collection: A Survey
Directory of Open Access Journals (Sweden)
Siyao Cheng
2017-03-01
Full Text Available With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoT and WSNs, the size of sensory data has already exceeded several petabytes annually, which poses many troubles and challenges for data collection, a primary operation in IoT and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
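The model-based category can be illustrated with the simplest possible scheme: a sensor transmits one value per run of readings as long as the readings stay within a fixed error bound of the transmitted value. This is an illustrative sketch of the idea, not a specific published algorithm:

```python
def compress_readings(readings, eps):
    """Error-bounded run-length approximation of a reading sequence.

    A new (value, run_length) segment is opened whenever a reading drifts
    more than eps from the segment's representative value, so the sink can
    reconstruct every reading with absolute error at most eps while the
    node transmits far fewer values.
    """
    segments = []
    start = 0
    for i, r in enumerate(readings):
        if abs(r - readings[start]) > eps:
            segments.append((readings[start], i - start))
            start = i
    segments.append((readings[start], len(readings) - start))
    return segments

def decompress(segments):
    """Reconstruct the approximate reading sequence at the sink."""
    return [v for v, n in segments for _ in range(n)]
```

The bandwidth/accuracy trade-off is controlled entirely by eps: a larger bound yields fewer segments (less energy spent transmitting) at the cost of a coarser reconstruction.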
Approximate truncation robust computed tomography—ATRACT
International Nuclear Information System (INIS)
Dennerlein, Frank; Maier, Andreas
2013-01-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets the reconstruction of volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)
Hydromagnetic turbulence in the direct interaction approximation
International Nuclear Information System (INIS)
Nagarajan, S.
1975-01-01
The dissertation is concerned with the nature of turbulence in a medium with large electrical conductivity. Three distinct though inter-related questions are asked. Firstly, the evolution of a weak, random initial magnetic field in a highly conducting, isotropically turbulent fluid is discussed. This was first discussed in the paper 'Growth of Turbulent Magnetic Fields' by Kraichnan and Nagarajan, The Physics of Fluids, volume 10, number 4, 1967. Secondly, the direct interaction approximation for hydromagnetic turbulence maintained by stationary, isotropic, random stirring forces is formulated in the wave-number-frequency domain. Thirdly, the dynamical evolution of a weak, random magnetic excitation in a turbulent electrically conducting fluid is examined under varying kinematic conditions. (G.T.H.)
Approximation Preserving Reductions among Item Pricing Problems
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has production cost d_i and each customer e_j ∈ E has valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan, et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can get more total profit in the case that p_i < 0 is allowed than in the case that p_i < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratio.
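A common baseline for such pricing problems is a uniform-price search: try one price for all items and keep the most profitable choice. The sketch below assumes single-minded customers and zero production costs; it illustrates the flavor of a simple pricing heuristic and is not an algorithm from the paper.

```python
def best_uniform_price(customers):
    """Search over candidate uniform item prices for single-minded buyers.

    `customers` is a list of (bundle_size, valuation) pairs. At uniform
    price p, customer (s, v) buys iff p * s <= v, contributing p * s to
    the profit (production costs taken as zero here). Only prices of the
    form v / s can be optimal, so those are the candidates tried.
    """
    candidates = {v / s for s, v in customers if s > 0}
    best_price, best_profit = 0.0, 0.0
    for p in candidates:
        profit = sum(p * s for s, v in customers if p * s <= v)
        if profit > best_profit:
            best_price, best_profit = p, profit
    return best_price, best_profit
```

With two customers, one valuing a single item at 4 and one valuing a two-item bundle at 6, the search correctly prefers the lower uniform price 3 (profit 9) over price 4 (profit 4), illustrating the trade-off described in the abstract.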
Approximate direct georeferencing in national coordinates
Legat, Klaus
Direct georeferencing has gained an increasing importance in photogrammetry and remote sensing. Thereby, the parameters of exterior orientation (EO) of an image sensor are determined by GPS/INS, yielding results in a global geocentric reference frame. Photogrammetric products like digital terrain models or orthoimages, however, are often required in national geodetic datums and mapped by national map projections, i.e., in "national coordinates". As the fundamental mathematics of photogrammetry is based on Cartesian coordinates, the scene restitution is often performed in a Cartesian frame located at some central position of the image block. The subsequent transformation to national coordinates is a standard problem in geodesy and can be done in a rigorous manner, at least if the formulas of the map projection are rigorous. Drawbacks of this procedure include practical deficiencies related to the photogrammetric processing as well as the computational cost of transforming the whole scene. To avoid these problems, the paper pursues an alternative processing strategy where the EO parameters are transformed prior to the restitution. If only this transition were done, however, the scene would be systematically distorted. The reason is that the national coordinates are not Cartesian due to the earth curvature and the unavoidable length distortion of map projections. To settle these distortions, several corrections need to be applied. These are treated in detail for both passive and active imaging. Since all these corrections are approximations only, the resulting technique is termed "approximate direct georeferencing". Still, the residual distortions are usually very low as is demonstrated by simulations, rendering the technique an attractive approach to direct georeferencing.
Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias
2018-01-22
Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximate numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities (e.g., mental calculation and ratio processing skills) were conducted. Approximate numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be subject of future investigations on decision making under risk.
Some properties of dual and approximate dual of fusion frames
Arefijamaal, Ali Akbar; Neyshaburi, Fahimeh Arabyani
2016-01-01
In this paper we extend the notion of approximate dual to fusion frames and present some approaches to obtain dual and approximate alternate dual fusion frames. Also, we study the stability of dual and approximate alternate dual fusion frames.
Approximation algorithms for a genetic diagnostics problem.
Kosaraju, S R; Schäffer, A A; Biesecker, L G
1998-01-01
We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
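The classical routine underlying the heuristics described above is the greedy heuristic for weighted SET COVER: repeatedly pick the set with the best cost per newly covered element. A minimal, self-contained sketch (the WDC-specific heuristics of the paper build on this idea but are not reproduced here):

```python
def greedy_set_cover(universe, sets, costs):
    """Greedy heuristic for weighted SET COVER.

    Repeatedly selects the set minimizing cost / (newly covered elements)
    until the universe is covered; achieves the well-known ln(n)
    approximation guarantee. Raises ValueError if no set can cover a
    remaining element.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: costs[i] / len(sets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

A local-search pass of the kind the abstract describes would then try swapping chosen sets for cheaper alternatives until no single swap improves the cover.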
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but also allows one to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected by whether the dynamics is driven by Brownian motion and the near-Gaussianity of the measure to be updated, respectively. © 2013 Elsevier Inc.
Configuring Airspace Sectors with Approximate Dynamic Programming
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
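The exact dynamic program described above is short enough to sketch directly: the state is the configuration chosen at each time step, and the transition cost adds the workload cost plus a reconfiguration penalty when the configuration changes. Cost tables and the position constraint are illustrative inputs, not operational data.

```python
def optimal_configurations(workload, reconfig_cost, positions, max_positions):
    """Exact finite-horizon DP for the sector-configuration problem.

    workload[t][c]  : workload cost of configuration c at time step t
    reconfig_cost   : cost charged whenever the configuration changes
    positions[c]    : control positions used by configuration c
    max_positions[t]: position limit at time step t
    Returns (optimal total cost, optimal configuration sequence).
    """
    T, C = len(workload), len(positions)
    INF = float("inf")
    cost = [[INF] * C for _ in range(T)]
    prev = [[None] * C for _ in range(T)]
    for c in range(C):
        if positions[c] <= max_positions[0]:
            cost[0][c] = workload[0][c]
    for t in range(1, T):
        for c in range(C):
            if positions[c] > max_positions[t]:
                continue  # configuration infeasible at this time step
            for p in range(C):
                if cost[t - 1][p] == INF:
                    continue
                step = cost[t - 1][p] + workload[t][c]
                if p != c:
                    step += reconfig_cost
                if step < cost[t][c]:
                    cost[t][c] = step
                    prev[t][c] = p
    best = min(range(C), key=lambda c: cost[T - 1][c])
    seq = [best]
    for t in range(T - 1, 0, -1):
        seq.append(prev[t][seq[-1]])
    seq.reverse()
    return cost[T - 1][best], seq
```

The run time is O(T·C²), which explains why exact DP is feasible for dozens of configurations but not for the thousands arising in future operations, where rollout-style approximations take over.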
Rainbows: Mie computations and the Airy approximation.
Wang, R T; van de Hulst, H C
1991-01-01
Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter-size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work.
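Ratio-based evaluation is what makes Mie computations stable at large size parameters. A standard, closely related ingredient is the downward recurrence for the logarithmic derivative of the Riccati-Bessel function, shown here as an illustrative stand-in for the paper's ratio method rather than its exact procedure:

```python
def log_derivative(rho, n_max, n_extra=15):
    """Logarithmic derivative D_n = psi_n'/psi_n of the Riccati-Bessel
    function psi_n, computed by numerically stable downward recurrence:

        D_{n-1} = n/rho - 1 / (D_n + n/rho),

    starting from D_N = 0 at N = n_max + n_extra (errors in the starting
    value decay rapidly as n decreases). Returns D_0 .. D_{n_max}.
    """
    N = n_max + n_extra
    D = [0.0] * (N + 1)
    for n in range(N, 0, -1):
        a = n / rho
        D[n - 1] = a - 1.0 / (D[n] + a)
    return D[:n_max + 1]
```

Since psi_0(rho) = sin(rho), the result can be checked against D_0 = cot(rho), independent of the arbitrary starting value at high order.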
Dynamical Vertex Approximation for the Hubbard Model
Toschi, Alessandro
A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for the contemporary condensed matter theory. However, in the last years considerable progress has been achieved by means of increasingly more powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: Here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.
Quantum adiabatic approximation and the geometric phase
International Nuclear Information System (INIS)
Mostafazadeh, A.
1997-01-01
A precise definition of an adiabaticity parameter ν of a time-dependent Hamiltonian is proposed. A variation of the time-dependent perturbation theory is presented which yields a series expansion of the evolution operator, U(τ) = Σ_ℓ U^(ℓ)(τ), with U^(ℓ)(τ) being at least of order ν^ℓ. In particular, U^(0)(τ) corresponds to the adiabatic approximation and yields Berry's adiabatic phase. It is shown that this series expansion has nothing to do with the 1/τ expansion of U(τ). It is also shown that the nonadiabatic part of the evolution operator is generated by a transformed Hamiltonian which is off-diagonal in the eigenbasis of the initial Hamiltonian. This suggests the introduction of an adiabatic product expansion for U(τ) which turns out to yield exact expressions for U(τ) for a large number of quantum systems. In particular, a simple application of the adiabatic product expansion is used to show that for the Hamiltonian describing the dynamics of a magnetic dipole in an arbitrarily changing magnetic field, there exists another Hamiltonian with the same eigenvectors for which the Schroedinger equation is exactly solvable. Some related issues concerning geometric phases and their physical significance are also discussed. © 1997 The American Physical Society
Magnetic reconnection under anisotropic magnetohydrodynamic approximation
International Nuclear Information System (INIS)
Hirabayashi, K.; Hoshino, M.
2013-01-01
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results show that once magnetic reconnection takes place, a firehose-sense (p∥ > p⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%-30% higher than that in the isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system, and is consistent with satellite observations in the Earth's magnetosphere.
When Density Functional Approximations Meet Iron Oxides.
Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong
2016-10-11
Three density functional approximations (DFAs), PBE, PBE+U, and the Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.
Kim, SungKun; Lee, Hunpyo
2017-06-01
Via a dynamical cluster approximation with N_c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters appear even in the highly doped region. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N_c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is initial research, because the SCA can manage long-range spatial fluctuations in feasible computational times beyond the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on the fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.
Hydration thermodynamics beyond the linear response approximation.
Raineri, Fernando O
2016-10-19
The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A → B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute
Bond selective chemistry beyond the adiabatic approximation
Energy Technology Data Exchange (ETDEWEB)
Butler, L.J. [Univ. of Chicago, IL (United States)
1993-12-01
One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reactions of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.
Cophylogeny reconstruction via an approximate Bayesian computation.
Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F
2015-05-01
Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
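The approximate Bayesian computation idea that Coala builds on can be shown in miniature. The sketch below is not Coala (which operates on host and parasite tree pairs and estimates event frequencies); it is a generic ABC rejection sampler with invented numbers, estimating an event rate by keeping only candidate rates whose simulated summary statistic falls close to the observed one.

```python
import random

random.seed(42)

N_TRIALS = 100          # events per simulated data set (invented)
OBSERVED_COUNT = 30     # observed number of events, so the true rate is near 0.3
EPSILON = 2             # acceptance tolerance on the summary statistic

def simulate(p, n=N_TRIALS):
    """Simulate a data set under candidate rate p; return its summary statistic."""
    return sum(random.random() < p for _ in range(n))

accepted = []
while len(accepted) < 500:
    p = random.random()                      # draw a candidate from the uniform prior
    if abs(simulate(p) - OBSERVED_COUNT) <= EPSILON:
        accepted.append(p)                   # keep rates that reproduce the data

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # typically close to 0.3
```

The accepted candidates approximate the posterior over the rate; Coala applies the same accept/reject logic with tree-reconciliation simulations and vectors of event probabilities in place of the single rate here.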
Coronal Loops: Evolving Beyond the Isothermal Approximation
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperatures. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
Toward a consistent random phase approximation based on the relativistic Hartree approximation
International Nuclear Information System (INIS)
Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.
1992-01-01
We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m * /m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data
Directory of Open Access Journals (Sweden)
Andrew W. Woodham
2016-12-01
Human papillomavirus type 16 (HPV16) infections are intra-epithelial, and thus, HPV16 is known to interact with Langerhans cells (LCs), the resident epithelial antigen-presenting cells (APCs). The current paradigm for APC-mediated induction of T cell anergy is through delivery of T cell receptor signals via peptides on MHC molecules (signal 1), but without costimulation (signal 2). We previously demonstrated that LCs exposed to HPV16 in vitro present HPV antigens to T cells without costimulation, but it remained uncertain if such T cells would remain ignorant, become anergic, or, in the case of CD4+ T cells, differentiate into Tregs. Here we demonstrate that Tregs were not induced by LCs presenting only signal 1, and through a series of in vitro immunizations show that CD8+ T cells receiving signal 1+2 from LCs weeks after consistently receiving signal 1 are capable of robust effector functions. Importantly, this indicates that T cells are not tolerized but instead remain ignorant to HPV, and are activated given the proper signals. Keywords: T cell anergy, T cell ignorance, Immune tolerance, Human papillomavirus, HPV16, Langerhans cells
Easterbrooks, Susan R; Lytle, Linda R; Sheets, Patricia M; Crook, Bobbie S
2004-01-01
In 2000, the 11th Circuit Court provided the largest single award in special education history to date, approximately $2.5 million, to two teenaged students who were deaf. The students were judged to have been denied a free, appropriate public education (FAPE), having spent their academic careers in generic special education classes for students with multiple disabilities without the benefit of access to a communication system; the services of a certified, qualified teacher of the deaf; or related services. This article describes the case from the perspective of FAPE, least restrictive environment, and due process in the presence of guardians who did not understand the implications of the Individual Education Program (IEP) teams' decisions; presents a chronology of the case; explores the implications for various stakeholders; and discusses the catastrophic impact on the social, emotional, communication, and academic development and earning potential of the students.
La sociologie peut-elle ignorer la phylogenèse de l'esprit ?
Directory of Open Access Journals (Sweden)
Joëlle Proust
2012-01-01
research in present-day cognitive science that is relevant to the de facto discussion fails to be taken into account. De jure irreducibility, on the other hand, introduces a dualism in the social sciences that is difficult to justify. The distinction between the epistemic and the cognitive realms is further presented as the ground of a de jure irreducibility; Albert Ogien, however, fails to conclusively establish that social coordination is a necessary precondition of sensitivity to epistemic norms. Louis Quéré, for his part, objects that cognitive science makes an ambiguous use of the concept of concept; a "rich" concept, which cognitive science tends to ignore, involving the understanding of truth, correction, etc., is of crucial relevance to sociology. It is responded that a meager concept of concept (unaccompanied by an analysis of what is epistemically distinctive of concepthood) is not only applied to characterize non-propositional thinking in animals; meager concepts are also part of humans' associative and evaluative repertoire, concerning, inter alia, their own capacities and the trustworthiness of their partners. Can sociology ignore the phylogenesis of thought? This article examines Albert Ogien's and Louis Quéré's arguments against social naturalism, that is, the metatheoretical project of integrating knowledge about the social, drawn from evolutionary biology and the cognitive sciences, into research carried out in the social sciences. Against Albert Ogien's arguments for the de facto and de jure irreducibility of the cognitive, it is objected that relevant recent research in cognitive science should be taken into account, and that de facto irreducibility introduces a dualism that is difficult to justify within the social sciences. On the other hand, if the de jure distinction between the epistemic and the cognitive, the ground on which the
Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian
2018-02-01
Solving the time-independent close-coupling equations of a diatom-diatom inelastic collision system with the rigorous close-coupling approach is numerically difficult because of the expensive matrix manipulations involved. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamics calculations. This approach considers only the most important Coriolis coupling with the nearest neighbors and ignores the weaker Coriolis couplings with farther K channels. As a result, it reduces the computational cost without a significant loss of accuracy. Numerical tests for para-H2+ortho-H2 and para-H2+HD inelastic collisions were carried out, and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in the quantum dynamics of other systems.
Approximal morphology as predictor of approximal caries in primary molar teeth
DEFF Research Database (Denmark)
Cortes, A; Martignon, S; Qvist, V
2018-01-01
consent was given, participated. Upper and lower molar teeth of one randomly selected side received a 2-day temporary separation. Bitewing radiographs and silicone impressions of the interproximal area (IPA) were obtained. The one-year procedures were repeated in 52 children (84%). The morphology of the distal...... surfaces of the first molar teeth and the mesial surfaces of the second molar teeth (n=208) was scored from the occlusal aspect on images from the baseline resin models, resulting in four IPA variants: concave-concave; concave-convex; convex-concave, and convex-convex. Approximal caries on the surface
International Nuclear Information System (INIS)
Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao
2014-01-01
In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as 〈S^2〉, are also developed and tested.
Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai
2013-02-16
Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.
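The importance-sampling half of SAMCIS rests on a standard identity: an expectation under a target density can be estimated by sampling from a tractable proposal and reweighting by the density ratio. The sketch below is a generic illustration with invented parameters, not SAMCIS and not contingency-table sampling; it estimates a rare-event probability, where a well-chosen proposal concentrates the samples in the region that matters.

```python
import math
import random

random.seed(1)

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

A = 3.0        # tail threshold: we estimate P(X > 3) for X ~ N(0, 1)
N = 200_000    # number of proposal draws

total = 0.0
for _ in range(N):
    x = A + random.expovariate(A)                  # proposal: A + Exp(rate=A)
    proposal_pdf = A * math.exp(-A * (x - A))      # density of that proposal at x
    total += normal_pdf(x) / proposal_pdf          # importance weight

estimate = total / N
print(estimate)  # about 1.35e-3, the true value of P(X > 3)
```

Naive Monte Carlo would need millions of draws to see this event at all; the reweighted proposal gives a low-variance estimate directly, which is the same leverage SAMCIS seeks when drawing tables from an enlarged reference set.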
SFU-driven transparent approximation acceleration on GPUs
Li, A.; Song, S.L.; Wijtvliet, M.; Kumar, A.; Corporaal, H.
2016-01-01
Approximate computing, the technique that sacrifices a certain amount of accuracy in exchange for a substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs
Some approximate calculations in SU2 lattice mean field theory
International Nuclear Information System (INIS)
Hari Dass, N.D.; Lauwers, P.G.
1981-12-01
Approximate calculations are performed for small Wilson loops of SU2 lattice gauge theory in the mean field approximation. Reasonable agreement is found with Monte Carlo data. Ways of improving these calculations are discussed. (Auth.)
Coefficients Calculation in Pascal Approximation for Passive Filter Design
Directory of Open Access Journals (Sweden)
George B. Kasapoglu
2018-02-01
The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude, in contrast to the best-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method that calculates the coefficients of the Pascal function. Two examples of the passive design are presented to illustrate the advantages and the disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables, which are created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, Elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.
Approximate viability for nonlinear evolution inclusions with application to controllability
Directory of Open Access Journals (Sweden)
Omar Benniche
2016-12-01
We investigate approximate viability for a graph with respect to fully nonlinear quasi-autonomous evolution inclusions. As an application, an approximate null controllability result is given.
PWL approximation of nonlinear dynamical systems, part I: structural stability
International Nuclear Information System (INIS)
Storace, M; De Feo, O
2005-01-01
This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes the approximation method and applies it to some particularly significant dynamical systems (topological normal forms). The structural stability of the PWL approximations of such systems is investigated through a bifurcation analysis (via continuation methods)
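The core PWL idea, independent of the circuit implementation and bifurcation analysis discussed above, can be sketched in a few lines: replace a nonlinear characteristic by linear interpolation between breakpoints and check the worst-case error. The function and breakpoint count below are invented for illustration.

```python
import numpy as np

f = np.tanh                                   # the nonlinearity to approximate
breakpoints = np.linspace(-3.0, 3.0, 25)      # PWL segment endpoints
values = f(breakpoints)

x = np.linspace(-3.0, 3.0, 2001)              # dense evaluation grid
pwl = np.interp(x, breakpoints, values)       # the piecewise-linear approximation

max_err = np.max(np.abs(pwl - f(x)))
print(max_err)  # worst-case error; shrinks quadratically as breakpoints are added
```

In the circuit-oriented setting the breakpoints become hardware segments, and the question studied in the paper is whether the approximated system preserves the qualitative (structurally stable) dynamics of the original.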
Pawlak algebra and approximate structure on fuzzy lattice.
Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai
2014-01-01
The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
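The classical Pawlak operators that this lattice-theoretic framework generalizes reduce, in the crisp case, to unions of equivalence classes. A minimal sketch with an invented universe and partition:

```python
# Equivalence classes of the universe U = {1, ..., 8} (invented for illustration)
partition = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]
X = {1, 2, 3, 6}                               # the set to approximate

lower = set().union(*[b for b in partition if b <= X])   # classes wholly inside X
upper = set().union(*[b for b in partition if b & X])    # classes meeting X

print(lower)  # {1, 2, 6}: union of classes wholly contained in X
print(upper)  # {1, 2, 3, 4, 5, 6}: union of classes intersecting X
boundary = upper - lower                       # region where membership is "rough"
print(boundary)  # {3, 4, 5}
```

The paper's weak approximation operators play the role of `lower` and `upper` here, but over a fuzzy lattice with an auxiliary ordering rather than a crisp partition.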
Comparison of four support-vector based function approximators
de Kruif, B.J.; de Vries, Theodorus J.A.
2004-01-01
One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been
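The support-vector flavor of function approximation can be sketched as follows. For simplicity the sketch solves a dense regularized kernel system (kernel ridge) rather than the SVM's quadratic program, whose solution would additionally be sparse in the samples so that only the support vectors are retained; all data and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=80)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=80)  # noisy target relation

def rbf(a, b, gamma=2.0):
    """RBF kernel matrix between two 1-D sample arrays."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# The approximator is a weighted sum of kernels centred on the training samples.
K = rbf(x_train, x_train)
alpha = np.linalg.solve(K + 1e-3 * np.eye(80), y_train)  # regularized fit

x_test = np.linspace(-2.5, 2.5, 101)
y_pred = rbf(x_test, x_train) @ alpha

max_err = np.max(np.abs(y_pred - np.sin(x_test)))
print(max_err)  # small inside the training interval
```

A true epsilon-SVR would drive most `alpha` entries to zero, keeping only the support vectors, which is exactly the property the comparison above evaluates across approximators.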
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
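For the real Chebyshev case that this work generalizes, a truncated Chebyshev expansion is a convenient near-best stand-in for the true minimax approximation, whose error curve equioscillates. A short sketch, with the function and degree chosen arbitrarily:

```python
import numpy as np

# Chebyshev points on [-1, 1] and the function to approximate
x = np.cos(np.pi * (np.arange(200) + 0.5) / 200)
f = np.exp(x)

# Degree-5 Chebyshev least-squares fit; at Chebyshev points this is
# essentially the truncated Chebyshev expansion of exp(x).
cheb = np.polynomial.chebyshev.Chebyshev.fit(x, f, deg=5)

err = np.max(np.abs(cheb(x) - f))
print(err)  # roughly 5e-5: the first neglected coefficient dominates the error
```

The explicit formulae derived in the paper identify complex cases where the best approximation itself, not just a near-best projection, is available in closed form.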
Aspects of three field approximations: Darwin, frozen, EMPULSE
International Nuclear Information System (INIS)
Boyd, J.K.; Lee, E.P.; Yu, S.S.
1985-01-01
The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability.
Approximation Properties of Certain Summation Integral Type Operators
Directory of Open Access Journals (Sweden)
Patel P.
2015-03-01
In the present paper, we study the approximation properties of a family of linear positive operators and establish direct results, an asymptotic formula, the rate of convergence, a weighted approximation theorem, an inverse theorem, and better approximation for this family of linear positive operators.
On Love's approximation for fluid-filled elastic tubes
International Nuclear Information System (INIS)
Caroli, E.; Mainardi, F.
1980-01-01
A simple procedure is set up to introduce Love's approximation for wave propagation in thin-walled fluid-filled elastic tubes. The dispersion relation for linear waves and the radial profile for fluid pressure are determined in this approximation. It is shown that the Love approximation is valid in the low-frequency regime. (author)
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian S.
2012-01-01
The adiabatic connection fluctuation-dissipation theorem with the random phase approximation (RPA) has recently been applied with success to obtain correlation energies of a variety of chemical and solid state systems. The main merit of this approach is the improved description of dispersive forces...... while chemical bond strengths and absolute correlation energies are systematically underestimated. In this work we extend the RPA by including a parameter-free renormalized version of the adiabatic local-density (ALDA) exchange-correlation kernel. The renormalization consists of a (local) truncation...... of the ALDA kernel for wave vectors q > 2kF, which is found to yield excellent results for the homogeneous electron gas. In addition, the kernel significantly improves both the absolute correlation energies and atomization energies of small molecules over RPA and ALDA. The renormalization can...