WorldWideScience

Sample records for surprisingly large number

  1. Some Surprising Introductory Physics Facts and Numbers

    Science.gov (United States)

    Mallmann, A. James

    2016-01-01

    In the entertainment world, people usually like, and find memorable, novels, short stories, and movies with surprise endings. This suggests that classroom teachers might want to present to their students examples of surprising facts associated with principles of physics. Possible benefits of finding surprising facts about principles of physics are…

  2. Surprise, Recipes for Surprise, and Social Influence.

    Science.gov (United States)

    Loewenstein, Jeffrey

    2018-02-07

    Surprising people can provide an opening for influencing them. Surprises garner attention, are arousing, are memorable, and can prompt shifts in understanding. Less noted is that, as a result, surprises can serve to persuade others by leading them to shifts in attitudes. Furthermore, because stories, pictures, and music can generate surprises and those can be widely shared, surprise can have broad social influence. People also tend to share surprising items with others, as anyone on social media has discovered. This means that in addition to broadcasting surprising information, surprising items can also spread through networks. The joint result is that surprise not only has individual effects on beliefs and attitudes but also collective effects on the content of culture. Items that generate surprise need not be random or accidental. There are predictable methods or recipes for generating surprise. One such recipe is discussed, the repetition-break plot structure, to explore the psychological and social possibilities of examining surprise. Recipes for surprise offer a useful means for understanding how surprise works and offer prospects for harnessing surprise to a wide array of ends. Copyright © 2017 Cognitive Science Society, Inc.

  3. Surprise... Surprise..., An Empirical Investigation on How Surprise is Connected to Customer Satisfaction

    NARCIS (Netherlands)

    J. Vanhamme (Joëlle)

    2003-01-01

    This research investigates the specific influence of the emotion of surprise on customer transaction-specific satisfaction. Four empirical studies were conducted: two field studies (a diary study and a cross-section survey) and two experiments. The results show that surprise positively

  4. Surprisal analysis and probability matrices for rotational energy transfer

    International Nuclear Information System (INIS)

    Levine, R.D.; Bernstein, R.B.; Kahana, P.; Procaccia, I.; Upchurch, E.T.

    1976-01-01

    The information-theoretic approach is applied to the analysis of state-to-state rotational energy transfer cross sections. The rotational surprisal is evaluated in the usual way, in terms of the deviance of the cross sections from their reference (''prior'') values. The surprisal is found to be an essentially linear function of the energy transferred. This behavior accounts for the experimentally observed exponential gap law for the hydrogen halide systems. The data base here analyzed (taken from the literature) is largely computational in origin: quantal calculations for the hydrogenic systems H₂ + H, He, Li⁺; HD + He; D₂ + H and for the N₂ + Ar system; and classical trajectory results for H₂ + Li⁺; D₂ + Li⁺ and N₂ + Ar. The surprisal analysis not only serves to compact a large body of data but also aids in the interpretation of the results. A single surprisal parameter θ_R suffices to account for the (relative) magnitude of all state-to-state inelastic cross sections at a given energy.
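The linearity claim in this abstract can be illustrated with a short numerical sketch (synthetic numbers, not the paper's data): if cross sections obey an exponential gap law, the surprisal −ln(σ/σ_prior) is linear in the energy transferred, with slope equal to the single surprisal parameter θ_R.

```python
# Sketch with invented values: synthetic cross sections follow an assumed
# exponential gap law, sigma = sigma_prior * exp(-theta_R * dE), so the
# surprisal -ln(sigma / sigma_prior) is linear in the energy transferred.
import math

theta_R = 0.8                                # assumed surprisal parameter
delta_E = [0.5, 1.0, 1.5, 2.0, 2.5]          # energy transferred (arb. units)
sigma_prior = [10.0, 8.0, 6.0, 4.0, 2.0]     # assumed prior cross sections

# Synthetic "observed" cross sections obeying the exponential gap law
sigma = [p * math.exp(-theta_R * dE) for p, dE in zip(sigma_prior, delta_E)]

# Surprisal: deviance of each cross section from its prior value
surprisal = [-math.log(s / p) for s, p in zip(sigma, sigma_prior)]

# Linearity check: the slope recovered from the endpoints is theta_R,
# the one parameter that compacts the whole set of cross sections
slope = (surprisal[-1] - surprisal[0]) / (delta_E[-1] - delta_E[0])
print(round(slope, 6))  # -> 0.8
```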

  5. Exploration, Novelty, Surprise and Free Energy Minimisation

    Directory of Open Access Journals (Sweden)

    Philipp Schwartenbeck

    2013-10-01

    This paper reviews recent developments under the free energy principle that introduce a normative perspective on classical economic (utilitarian) decision-making based on (active) Bayesian inference. It has been suggested that the free energy principle precludes novelty and complexity, because it assumes that biological systems, like ourselves, try to minimise the long-term average of surprise to maintain their homeostasis. However, recent formulations show that minimising surprise leads naturally to concepts such as exploration and novelty bonuses. In this approach, agents infer a policy that minimises surprise by minimising the difference (or relative entropy) between likely and desired outcomes, which involves both pursuing the goal-state that has the highest expected utility (often termed ‘exploitation’) and visiting a number of different goal-states (‘exploration’). Crucially, the opportunity to visit new states increases the value of the current state. Casting decision-making problems within a variational framework, therefore, predicts that our behaviour is governed by both the entropy and expected utility of future states. This dissolves any dialectic between minimising surprise and exploration or novelty seeking.
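The policy-selection idea in this abstract can be sketched in a few lines (an illustrative toy, not the paper's model): score each candidate policy by the relative entropy between the outcomes it makes likely and the outcomes the agent desires, and infer the policy that minimises it. All distributions below are invented.

```python
# Toy sketch of surprise minimisation as relative-entropy minimisation.
import math

def kl(p, q):
    """Relative entropy D(p || q) in nats; assumes matching supports."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

desired = [0.7, 0.2, 0.1]                 # assumed preferred outcomes
policies = {
    "explore": [0.5, 0.3, 0.2],           # assumed likely outcomes per policy
    "exploit": [0.7, 0.2, 0.1],
    "avoid":   [0.1, 0.1, 0.8],
}

# The agent infers the policy whose likely outcomes best match its desires
best = min(policies, key=lambda name: kl(policies[name], desired))
print(best)  # -> exploit (zero divergence, i.e. minimal surprise)
```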

  6. Surprise and Memory as Indices of Concrete Operational Development

    Science.gov (United States)

    Achenbach, Thomas M.

    1973-01-01

    Normal and retarded children's use of color, number, length and continuous quantity as attributes of identification was assessed by presenting them with contrived changes in three properties. Surprise and correct memory responses for color preceded those to number, which preceded logical verbal responses to a conventional number-conservation task.…

  7. Surprise Trips

    DEFF Research Database (Denmark)

    Korn, Matthias; Kawash, Raghid; Andersen, Lisbet Møller

    2010-01-01

    We report on a platform that augments the natural experience of exploration in diverse indoor and outdoor environments. The system builds on the theme of surprises in terms of user expectations and finding points of interest. It utilizes physical icons as representations of users' interests … and as notification tokens to alert users when they are within proximity of a surprise. To evaluate the concept, we developed mock-ups and a video prototype, and conducted a wizard-of-oz user test for a national park in Denmark.

  8. Ontological Surprises

    DEFF Research Database (Denmark)

    Leahu, Lucian

    2016-01-01

    This paper investigates how we might rethink design as the technological crafting of human-machine relations in the context of a machine learning technique called neural networks. It analyzes Google’s Inceptionism project, which uses neural networks for image recognition. The surprising output … a hybrid approach where machine learning algorithms are used to identify objects as well as connections between them; finally, it argues for remaining open to ontological surprises in machine learning as they may enable the crafting of different relations with and through technologies.

  9. An efficient community detection algorithm using greedy surprise maximization

    International Nuclear Information System (INIS)

    Jiang, Yawen; Jia, Caiyan; Yu, Jian

    2014-01-01

    Community detection is an important and crucial problem in complex network analysis. Although classical modularity function optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from its resolution limit. Recently, the surprise function (S) was experimentally proved to be better than the Q function. However, up until now, there has been no algorithm available to perform searches to directly determine the maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and even more complex networks. Moreover, algorithms based on surprise maximization perform better than those algorithms based on modularity maximization, including Blondel–Guillaume–Lambiotte–Lefebvre (BGLL), Clauset–Newman–Moore (CNM) and the other state-of-the-art algorithms such as Infomap, order statistics local optimization method (OSLOM) and label propagation algorithm (LPA). (paper)
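The quantity that (F)AGSO greedily maximises can be sketched numerically. The formula below is the asymptotic form of surprise (following Traag et al.; an assumption here, not code from the paper): with m links of which m_in are intra-community, q = m_in/m the observed intra-community link fraction and qe the fraction expected from the community sizes, S = m·D(q ∥ qe), a binomial Kullback-Leibler divergence. Larger S means the partition concentrates links inside communities far more than chance would.

```python
# Hedged sketch of asymptotic surprise for a partition of a graph.
import math

def asymptotic_surprise(m, m_in, n, community_sizes):
    """m: total links, m_in: intra-community links,
    n: number of nodes, community_sizes: nodes per community."""
    pairs = n * (n - 1) / 2
    intra_pairs = sum(c * (c - 1) / 2 for c in community_sizes)
    q, qe = m_in / m, intra_pairs / pairs

    def term(a, b):  # one branch of the binomial KL divergence
        return a * math.log(a / b) if a > 0 else 0.0

    return m * (term(q, qe) + term(1 - q, 1 - qe))

# Two 4-node cliques joined by a single bridge link: 13 links, 12 internal
good = asymptotic_surprise(13, 12, 8, [4, 4])
# Same graph with all nodes lumped into one community: q == qe, so S == 0
trivial = asymptotic_surprise(13, 13, 8, [8])
print(good > trivial)  # -> True: the two-clique split is favoured
```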

  10. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer

  11. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1, number and continuous quantities were simultaneously available; in Exp. 2, we controlled for continuous quantities and only numerical information was available; in Exp. 3, numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  12. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities, but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease with decreasing numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all
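The ratio effect reported above (success at 1:2 and 2:3, failure at 3:4) can be sketched with a minimal Weber-ratio model (illustrative only, not the paper's analysis): a pair counts as discriminable when the smaller/larger ratio is at or below a threshold. The 0.70 threshold below is an assumed value chosen to reproduce the reported pattern.

```python
# Toy ratio-limited discrimination model for approximate number comparison.
THRESHOLD = 0.70  # assumed Weber-ratio limit (hypothetical value)

def discriminable(a, b, threshold=THRESHOLD):
    """True when the small/large ratio is within the assumed Weber limit."""
    small, large = sorted((a, b))
    return small / large <= threshold

for pair in [(4, 8), (8, 12), (9, 12), (100, 200)]:
    print(pair, discriminable(*pair))
# (4, 8) True; (8, 12) True (2:3); (9, 12) False (3:4); (100, 200) True
```

Note how a pure ratio limit reproduces the abstract's finding that accuracy at 100 vs. 200 matches 4 vs. 8: both pairs share the same 1:2 ratio, so absolute size plays no role.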

  13. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  14. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  15. Cloud Surprises in Moving NASA EOSDIS Applications into Amazon Web Services

    Science.gov (United States)

    Mclaughlin, Brett

    2017-01-01

    NASA ESDIS has been moving a variety of data ingest, distribution, and science data processing applications into a cloud environment over the last 2 years. As expected, there have been a number of challenges in migrating primarily on-premises applications into a cloud-based environment, related to architecture and taking advantage of cloud-based services. What was not expected was a number of issues that went beyond purely technical application re-architectures. We ran into surprising network policy limitations, billing challenges in a government-based cost model, and difficulty in obtaining certificates in a NASA security-compliant manner. On the other hand, this approach has allowed us to move a number of applications from local hosting to the cloud in a matter of hours (yes, hours!!), and our CMR application now services 95% of granule searches and an astonishing 99% of all collection searches in under a second. And most surprising of all, well, you'll just have to wait and see the realization that caught our entire team off guard!

  16. Climate Change as a Predictable Surprise

    International Nuclear Information System (INIS)

    Bazerman, M.H.

    2006-01-01

    In this article, I analyze climate change as a 'predictable surprise', an event that leads an organization or nation to react with surprise, despite the fact that the information necessary to anticipate the event and its consequences was available (Bazerman and Watkins, 2004). I then assess the cognitive, organizational, and political reasons why society fails to implement wise strategies to prevent predictable surprises generally and climate change specifically. Finally, I conclude with an outline of a set of response strategies to overcome barriers to change

  17. A surprising palmar nevus: A case report

    Directory of Open Access Journals (Sweden)

    Rana Rafiei

    2018-02-01

    A raised palmar or plantar nevus, especially in white people, is an unusual finding. We present an uncommon palmar compound nevus in a 26-year-old woman with a large diameter (6 mm) and a collarette-shaped margin. In the histopathologic evaluation, intralymphatic protrusions of nevic nests were noted. This case was surprising to us for several reasons: the size, shape, location and histopathology of the lesion. Palmar nevi are usually junctional (flat) and below 3 mm in diameter, and intralymphatic protrusion or invasion in nevi is an extremely rare phenomenon.

  18. “Surprise Gift” Purchases of Small Electric Appliances: A Pilot Study

    NARCIS (Netherlands)

    J. Vanhamme (Joëlle); C.J.P.M. de Bont (Cees)

    2005-01-01

    Understanding decision-making processes for gifts is of strategic importance for companies selling small electrical appliances, as gifts account for a large part of their sales. Among all gifts, the ones that are surprising are the most valued by recipients. However, research about

  19. A toolkit for detecting technical surprise.

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Michael Wayne; Foehse, Mark C.

    2010-10-01

    The detection of a scientific or technological surprise within a secretive country or institute is very difficult. The ability to detect such surprises would allow analysts to identify the capabilities that could be a military or economic threat to national security. Sandia's current approach utilizing ThreatView has been successful in revealing potential technological surprises. However, as data sets become larger, it becomes critical to use algorithms as filters along with the visualization environments. Our two-year LDRD had two primary goals. First, we developed a tool, a Self-Organizing Map (SOM), to extend ThreatView and improve our understanding of the issues involved in working with textual data sets. Second, we developed a toolkit for detecting indicators of technical surprise in textual data sets. Our toolkit has been successfully used to perform technology assessments for the Science & Technology Intelligence (S&TI) program.

  20. Corrugator Activity Confirms Immediate Negative Affect in Surprise

    Directory of Open Access Journals (Sweden)

    Sascha Topolinski

    2015-02-01

    The emotion of surprise entails a complex of immediate responses, such as cognitive interruption, attention allocation to, and more systematic processing of the surprising stimulus. All these processes serve the ultimate function of increasing processing depth and thus cognitively mastering the surprising stimulus. The present account introduces phasic negative affect as the underlying mechanism responsible for these consequences. Surprising stimuli are schema-discrepant and thus entail cognitive disfluency, which elicits immediate negative affect. This affect in turn works like a phasic cognitive tuning, switching the current processing mode from more automatic and heuristic to more systematic and reflective processing. Directly testing the initial elicitation of negative affect by surprising events, the present experiment presented high- and low-surprise neutral trivia statements to N = 28 participants while assessing their spontaneous facial expressions via facial electromyography. Highly surprising trivia elicited higher corrugator activity than less surprising trivia, indicative of negative affect and mental effort, while leaving zygomaticus (positive affect) and frontalis (cultural surprise expression) activity unaffected. Future research shall investigate the mediating role of negative affect in eliciting surprise-related outcomes.

  1. The Influence of Negative Surprise on Hedonic Adaptation

    Directory of Open Access Journals (Sweden)

    Ana Paula Kieling

    2016-01-01

    After some time using a product or service, the consumer tends to feel less pleasure with consumption. This reduction of pleasure is known as hedonic adaptation. One of the emotions that interferes in this process is surprise. Based on two experiments, we suggest that negative surprise, unlike positive surprise, influences the level of pleasure foreseen and experienced by the consumer. Study 1 analyzes the influence of negative (vs. positive) surprise on the consumer’s post-purchase hedonic adaptation expectation. Results showed that negative surprise influences the intensity of adaptation, augmenting its strength. Study 2 verifies the influence of negative (vs. positive) surprise on hedonic adaptation. The findings suggest that negative surprise makes adaptation happen more intensively and faster as time goes by, which has consequences for companies and consumers in the post-purchase process, such as satisfaction and loyalty.

  2. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  3. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological red-shift law is also derived and is shown to differ considerably from the standard form νR = const.

  4. Surprise: a belief or an emotion?

    Science.gov (United States)

    Mellers, Barbara; Fincher, Katrina; Drummond, Caitlin; Bigony, Michelle

    2013-01-01

    Surprise is a fundamental link between cognition and emotion. It is shaped by cognitive assessments of likelihood, intuition, and superstition, and it in turn shapes hedonic experiences. We examine this connection between cognition and emotion and offer an explanation called decision affect theory. Our theory predicts the affective consequences of mistaken beliefs, such as overconfidence and hindsight. It provides insight about why the pleasure of a gain can loom larger than the pain of a comparable loss. Finally, it explains cross-cultural differences in emotional reactions to surprising events. By changing the nature of the unexpected (from chance to good luck), one can alter the emotional reaction to surprising events. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_(B-L). Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  6. Exploring the concept of climate surprises. A review of the literature on the concept of surprise and how it is related to climate change

    International Nuclear Information System (INIS)

    Glantz, M.H.; Moore, C.M.; Streets, D.G.; Bhatti, N.; Rosa, C.H.

    1998-01-01

    This report examines the concept of climate surprise and its implications for environmental policymaking. Although most integrated assessment models of climate change deal with average values of change, it is usually the extreme events or surprises that cause the most damage to human health and property. Current models do not help the policymaker decide how to deal with climate surprises. This report examines the literature of surprise in many aspects of human society: psychology, military, health care, humor, agriculture, etc. It draws together various ways to consider the concept of surprise and examines different taxonomies of surprise that have been proposed. In many ways, surprise is revealed to be a subjective concept, triggered by such factors as prior experience, belief system, and level of education. How policymakers have reacted to specific instances of climate change or climate surprise in the past is considered, particularly with regard to the choices they made between proactive and reactive measures. Finally, the report discusses techniques used in the current generation of assessment models and makes suggestions as to how climate surprises might be included in future models. The report concludes that some kinds of surprises are simply unpredictable, but there are several types that could in some way be anticipated and assessed, and their negative effects forestalled

  7. Exploring the concept of climate surprises. A review of the literature on the concept of surprise and how it is related to climate change

    Energy Technology Data Exchange (ETDEWEB)

    Glantz, M.H.; Moore, C.M. [National Center for Atmospheric Research, Boulder, CO (United States); Streets, D.G.; Bhatti, N.; Rosa, C.H. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.; Stewart, T.R. [State Univ. of New York, Albany, NY (United States)

    1998-01-01

    This report examines the concept of climate surprise and its implications for environmental policymaking. Although most integrated assessment models of climate change deal with average values of change, it is usually the extreme events or surprises that cause the most damage to human health and property. Current models do not help the policymaker decide how to deal with climate surprises. This report examines the literature of surprise in many aspects of human society: psychology, military, health care, humor, agriculture, etc. It draws together various ways to consider the concept of surprise and examines different taxonomies of surprise that have been proposed. In many ways, surprise is revealed to be a subjective concept, triggered by such factors as prior experience, belief system, and level of education. How policymakers have reacted to specific instances of climate change or climate surprise in the past is considered, particularly with regard to the choices they made between proactive and reactive measures. Finally, the report discusses techniques used in the current generation of assessment models and makes suggestions as to how climate surprises might be included in future models. The report concludes that some kinds of surprises are simply unpredictable, but there are several types that could in some way be anticipated and assessed, and their negative effects forestalled.

  8. Surprises and counterexamples in real function theory

    CERN Document Server

    Rajwade, A R

    2007-01-01

    This book presents a variety of intriguing, surprising and appealing topics and nonroutine theorems in real function theory. It is a reference book to which one can turn for answers to questions that arise while studying or teaching analysis. Chapter 1 is an introduction to algebraic, irrational and transcendental numbers and contains the Cantor ternary set. Chapter 2 contains functions with extraordinary properties, such as functions that are continuous at each point but differentiable at no point. Chapters 4 and 5 cover the intermediate value property, periodic functions, Rolle's theorem, Taylor's theorem, and points of tangency. Chapter 6 discusses sequences and series, including the restricted harmonic series, rearrangements of the alternating harmonic series and some number-theoretic aspects. Chapter 7 studies series with peculiar ranges of convergence. Appendix I deals with some specialized topics. Exercises at the end of chapters and their solutions are provided in Appendix II. This book will be useful for students and teachers alike.

  9. The role of surprise in satisfaction judgements

    NARCIS (Netherlands)

    Vanhamme, J.; Snelders, H.M.J.J.

    2001-01-01

    Empirical findings suggest that surprise plays an important role in consumer satisfaction, but there is a lack of theory to explain why this is so. The present paper provides explanations for the process through which positive (negative) surprise might enhance (reduce) consumer satisfaction. First,

  10. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather have rarely been studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
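The risk ratios quoted in this abstract are conditional-probability ratios, and the arithmetic is easy to reproduce. The crash counts below are invented placeholders chosen only to mirror the reported rain/snow/fog pattern; they are not FARS data.

```python
# Illustrative arithmetic only: how per-weather risk ratios are formed.
counts = {
    # weather: (fatal crashes involving >= 10 vehicles, all fatal crashes)
    "good": (100, 1_000_000),   # made-up counts, not FARS data
    "rain": (30, 100_000),
    "snow": (24, 10_000),
    "fog":  (7, 2_000),
}

p_good = counts["good"][0] / counts["good"][1]
for weather, (big, total) in counts.items():
    risk_ratio = (big / total) / p_good
    print(f"{weather}: {risk_ratio:.0f}x the good-weather risk")
# With these made-up counts: rain 3x, snow 24x, fog 35x, matching the
# shape of the reported pattern (rain < snow < fog).
```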

  11. A Dichotomic Analysis of the Surprise Examination Paradox

    OpenAIRE

    Franceschi, Paul

    2002-01-01

This paper presents a dichotomic analysis of the surprise examination paradox. In section 1, I analyse the notion of surprise in detail. In section 2, I introduce the distinction between a monist and a dichotomic analysis of the paradox. I also present there a dichotomy leading to the distinction of two fundamentally and structurally different versions of the paradox, based respectively on a conjoint and a disjoint definition of surprise. In section 3, I describe the solution to SEP corresponding to...

  12. Spatiotemporal neural characterization of prediction error valence and surprise during reward learning in humans.

    Science.gov (United States)

    Fouragnan, Elsa; Queirazza, Filippo; Retzler, Chris; Mullinger, Karen J; Philiastides, Marios G

    2017-07-06

Reward learning depends on accurate reward associations with potential choices. These associations can be attained with reinforcement learning mechanisms using a reward prediction error (RPE) signal (the difference between actual and expected rewards) for updating future reward expectations. Despite an extensive body of literature on the influence of RPE on learning, little has been done to investigate the potentially separate contributions of RPE valence (positive or negative) and surprise (absolute degree of deviation from expectations). Here, we coupled single-trial electroencephalography with simultaneously acquired fMRI, during a probabilistic reversal-learning task, to offer evidence of temporally overlapping but largely distinct spatial representations of RPE valence and surprise. Electrophysiological variability in RPE valence correlated with activity in regions of the human reward network promoting approach or avoidance learning. Electrophysiological variability in RPE surprise correlated primarily with activity in regions of the human attentional network controlling the speed of learning. Crucially, despite the largely separate spatial extent of these representations, our EEG-informed fMRI approach uniquely revealed a linear superposition of the two RPE components in a smaller network encompassing visuo-mnemonic and reward areas. Activity in this network was further predictive of stimulus value updating, indicating a comparable contribution of both signals to reward learning.
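The decomposition the study relies on follows directly from the definitions given in the abstract: the RPE is the difference between actual and expected reward, its valence is the sign, and its surprise is the absolute deviation. This sketch only illustrates those definitions; it is not the authors' analysis code:

```python
# Decompose a reward prediction error into the two components the study
# distinguishes: valence (sign) and surprise (unsigned magnitude).

def rpe_components(actual_reward, expected_reward):
    rpe = actual_reward - expected_reward  # reward prediction error
    valence = 1 if rpe >= 0 else -1        # positive vs negative outcome
    surprise = abs(rpe)                    # absolute deviation from expectation
    return rpe, valence, surprise

print(rpe_components(1.0, 0.75))  # (0.25, 1, 0.25): better than expected
print(rpe_components(0.0, 0.75))  # (-0.75, -1, 0.75): worse, and more surprising
```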

  13. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf
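For context, the Choquet integral named in the keywords is the standard integral with respect to a monotone measure (capacity) μ. For a nonnegative measurable function f it is usually defined as

```latex
(C)\int f \,\mathrm{d}\mu \;=\; \int_0^{\infty} \mu\bigl(\{x : f(x) \ge t\}\bigr)\,\mathrm{d}t ,
```

which reduces to the ordinary Lebesgue integral when μ is additive; laws of large numbers for monotone measures replace the classical expectation with this integral.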

  14. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and to analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to the single reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number 2n = 40 seems to be conclusively the universal number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also among woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between 2C values and seed masses in the examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  15. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.

  16. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup, Rayleigh numbers of up to Ra = 10¹⁵ can be reached, while Ekman numbers as low as Ek = 10⁻⁸ are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.
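The dimensionless groups quoted above have standard textbook definitions (the factor of 2 in the Ekman number is one common convention; others omit it). The numeric values below are placeholders, not the experiment's measured parameters:

```python
# Standard definitions of the control parameters in rotating convection.

def rayleigh(g, alpha, dT, L, nu, kappa):
    """Ra = g * alpha * dT * L**3 / (nu * kappa): buoyancy vs. diffusion."""
    return g * alpha * dT * L**3 / (nu * kappa)

def ekman(nu, Omega, L):
    """Ek = nu / (2 * Omega * L**2): viscous vs. Coriolis forces."""
    return nu / (2 * Omega * L**2)

def prandtl(nu, kappa):
    """Pr = nu / kappa: momentum vs. thermal diffusivity."""
    return nu / kappa

# Placeholder values (not SF6 at 19 bar); nu/kappa = 2.0/2.5 gives Pr = 0.8.
print(prandtl(nu=2.0, kappa=2.5))  # 0.8
```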

  17. A Contrast-Based Computational Model of Surprise and Its Applications.

    Science.gov (United States)

    Macedo, Luis; Cardoso, Amílcar

    2017-11-19

    We review our work on a contrast-based computational model of surprise and its applications. The review is contextualized within related research from psychology, philosophy, and particularly artificial intelligence. Influenced by psychological theories of surprise, the model assumes that surprise-eliciting events initiate a series of cognitive processes that begin with the appraisal of the event as unexpected, continue with the interruption of ongoing activity and the focusing of attention on the unexpected event, and culminate in the analysis and evaluation of the event and the revision of beliefs. It is assumed that the intensity of surprise elicited by an event is a nonlinear function of the difference or contrast between the subjective probability of the event and that of the most probable alternative event (which is usually the expected event); and that the agent's behavior is partly controlled by actual and anticipated surprise. We describe applications of artificial agents that incorporate the proposed surprise model in three domains: the exploration of unknown environments, creativity, and intelligent transportation systems. These applications demonstrate the importance of surprise for decision making, active learning, creative reasoning, and selective attention. Copyright © 2017 Cognitive Science Society, Inc.
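The contrast idea in the abstract — surprise as a nonlinear function of the difference between the probability of the observed event and that of the most probable alternative — can be sketched compactly. The log-based form below follows one published formulation by Macedo and Cardoso; treat it as illustrative rather than as the definitive model:

```python
import math

# Contrast-based surprise: grows nonlinearly with the gap between the
# probability of the most probable (expected) event and the observed event.

def surprise(p_event, probabilities):
    """Surprise (in bits) of an observed event, given the full outcome distribution."""
    p_max = max(probabilities)          # probability of the expected event
    return math.log2(1 + p_max - p_event)

dist = [0.7, 0.2, 0.1]
print(surprise(0.7, dist))  # 0.0: the expected event is not surprising
print(surprise(0.1, dist))  # log2(1.6): a rare event elicits surprise
```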

  18. The Role of Surprise in Game-Based Learning for Mathematics

    NARCIS (Netherlands)

    Wouters, Pieter; van Oostendorp, Herre; ter Vrugte, Judith; Vandercruysse, Sylke; de Jong, Anthonius J.M.; Elen, Jan; De Gloria, Alessandro; Veltkamp, Remco

    2016-01-01

    In this paper we investigate the potential of surprise on learning with prevocational students in the domain of proportional reasoning. Surprise involves an emotional reaction, but it also serves a cognitive goal as it directs attention to explain why the surprising event occurred and to learn for

  19. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
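A hedged sketch of the kind of probabilistic forecast described above: given an expected number of large fires in a Predictive Services Area for the coming week, a counting model yields P(at least k fires). The Poisson assumption and the rate value are illustrative choices, not the authors' fitted model:

```python
import math

def prob_at_least(k, lam):
    """P(N >= k) for N ~ Poisson(lam)."""
    p_less = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_less

lam = 0.8  # invented expected number of large fires per week
for k in (1, 2, 3, 4):
    print(k, round(prob_at_least(k, lam), 4))
```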

  20. A Neural Mechanism for Surprise-related Interruptions of Visuospatial Working Memory.

    Science.gov (United States)

    Wessel, Jan R

    2018-01-01

    Surprising perceptual events recruit a fronto-basal ganglia mechanism for inhibition, which suppresses motor activity following surprise. A recent study found that this inhibitory mechanism also disrupts the maintenance of verbal working memory (WM) after surprising tones. However, it is unclear whether this same mechanism also relates to surprise-related interruptions of non-verbal WM. We tested this hypothesis using a change-detection task, in which surprising tones impaired visuospatial WM. Participants also performed a stop-signal task (SST). We used independent component analysis and single-trial scalp-electroencephalogram to test whether the same inhibitory mechanism that reflects motor inhibition in the SST relates to surprise-related visuospatial WM decrements, as was the case for verbal WM. As expected, surprising tones elicited activity of the inhibitory mechanism, and this activity correlated strongly with the trial-by-trial level of surprise. However, unlike for verbal WM, the activity of this mechanism was unrelated to visuospatial WM accuracy. Instead, inhibition-independent activity that immediately succeeded the inhibitory mechanism was increased when visuospatial WM was disrupted. This shows that surprise-related interruptions of visuospatial WM are not effected by the same inhibitory mechanism that interrupts verbal WM, and instead provides evidence for a 2-stage model of distraction. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers state. It was quasi-experimental. Two research ...
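The grating (lattice) method referred to above can be mirrored directly in code: each digit pair is multiplied into a grid cell, and the diagonals are then summed with carries, just as in the paper-and-pencil procedure. A sketch:

```python
# Lattice ("grating") multiplication of large positive whole numbers.

def lattice_multiply(a: int, b: int) -> int:
    xs = [int(d) for d in str(a)]
    ys = [int(d) for d in str(b)]
    # Each cell product dx*dy lands on diagonal i + j of the grid.
    diag = [0] * (len(xs) + len(ys) - 1)
    for i, dx in enumerate(xs):
        for j, dy in enumerate(ys):
            diag[i + j] += dx * dy
    # Resolve carries starting from the least significant diagonal.
    digits, carry = [], 0
    for s in reversed(diag):
        total = s + carry
        digits.append(total % 10)
        carry = total // 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(str(d) for d in reversed(digits)))

print(lattice_multiply(345, 67))  # 23115
```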

  2. Human amygdala response to dynamic facial expressions of positive and negative surprise.

    Science.gov (United States)

    Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David

    2014-02-01

Although brain imaging evidence accumulates to suggest that the amygdala plays a key role in the processing of novel stimuli, little is known about its role in processing expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. Those investigations that have already probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it still remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, but also intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may either correspond to a specific wonderment effect as such, or to the computation of a negative expected value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. 'Surprise': Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012.

    Science.gov (United States)

    Parry, Amy; Fearnley, Emily; Denehy, Emma

    2012-10-01

In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. A guest list was obtained and a retrospective cohort study undertaken. A combination of paper-based and telephone questionnaires was used to collect exposure and outcome information. An environmental investigation was conducted by Food Policy and Programs Branch at the implicated premises. All 57 guests completed the questionnaire (100% response rate), and 15 met the case definition. Analysis showed a significant association between illness and consumption of chicken liver pâté (relative risk: 16.7, 95% confidence interval: 2.4-118.6). No other food or beverage served at the party was associated with illness. Three guests submitted stool samples; all were positive for Campylobacter. The environmental investigation identified that the cooking process used in the preparation of chicken liver pâté may have been inconsistent, resulting in some portions not being cooked adequately to inactivate potential Campylobacter contamination. Chicken liver products are a known source of Campylobacter infection; therefore, education of food handlers remains a high priority. To better identify outbreaks among the large number of Campylobacter notifications, routine typing of Campylobacter isolates is recommended.
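The relative risk and confidence interval reported above come from a standard 2x2 cohort-study calculation. This worked sketch uses invented counts for illustration (they are not the study's data, and do not reproduce its exact figures); the CI uses the common log-scale (Katz) method:

```python
import math

def relative_risk(ill_exposed, n_exposed, ill_unexposed, n_unexposed):
    """Relative risk with a 95% confidence interval (log-scale method)."""
    rr = (ill_exposed / n_exposed) / (ill_unexposed / n_unexposed)
    se = math.sqrt(1/ill_exposed - 1/n_exposed + 1/ill_unexposed - 1/n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented 2x2 table: 14 of 28 pâté eaters ill, 1 of 29 non-eaters ill.
rr, lo, hi = relative_risk(ill_exposed=14, n_exposed=28,
                           ill_unexposed=1, n_unexposed=29)
print(f"RR = {rr:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```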

  4. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  5. The Value of Surprising Findings for Research on Marketing

    OpenAIRE

    JS Armstrong

    2004-01-01

    In the work of Armstrong (Journal of Business Research, 2002), I examined empirical research on the scientific process and related these to marketing science. The findings of some studies were surprising. In this reply, I address surprising findings and other issues raised by commentators.

  6. Radar Design to Protect Against Surprise

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

Technological and doctrinal surprise is about rendering preparations for conflict irrelevant or ineffective. For a sensor, this means essentially rendering the sensor irrelevant or ineffective in its ability to help determine truth. Recovery from this sort of surprise is facilitated by flexibility in our own technology and doctrine. For a sensor, this means flexibility in its architecture, design, tactics, and the designing organizations' processes. Acknowledgements: This report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  7. Cloud Surprises Discovered in Moving NASA EOSDIS Applications into Amazon Web Services… and #6 Will Shock You!

    Science.gov (United States)

    McLaughlin, B. D.; Pawloski, A. W.

    2017-12-01

NASA ESDIS has been moving a variety of data ingest, distribution, and science data processing applications into a cloud environment over the last two years. As expected, there have been a number of challenges in migrating primarily on-premises applications into a cloud-based environment, related to architecture and to taking advantage of cloud-based services. What was not expected was a number of issues beyond purely technical application re-architecture. From surprising network policy limitations, billing challenges in a government-based cost model, and obtaining certificates in a NASA-security-compliant manner, to working with multiple applications in a shared and resource-constrained AWS account, these have been the notable challenges in taking advantage of a cloud model. And most surprising of all… well, you'll just have to wait and see the "gotcha" that caught our entire team off guard!

  8. Surprise as a design strategy

    NARCIS (Netherlands)

    Ludden, G.D.S.; Schifferstein, H.N.J.; Hekkert, P.P.M.

    2008-01-01

    Imagine yourself queuing for the cashier’s desk in a supermarket. Naturally, you have picked the wrong line, the one that does not seem to move at all. Soon, you get tired of waiting. Now, how would you feel if the cashier suddenly started to sing? Many of us would be surprised and, regardless of

  9. Surprising Incentive: An Instrument for Promoting Safety Performance of Construction Employees

    Directory of Open Access Journals (Sweden)

    Fakhradin Ghasemi

    2015-09-01

Conclusion: The results of this study showed that the surprising incentive improved employees' safety performance only in the short term, because the surprise value of the incentives dwindles over time. For this reason, and to maintain the surprise value of the incentive system, the amount and types of incentives need to be evaluated and modified annually or biannually.

  10. Surprises in the suddenly-expanded infinite well

    International Nuclear Information System (INIS)

    Aslangul, Claude

    2008-01-01

I study the time evolution of a particle prepared in the ground state of an infinite well after the latter is suddenly expanded. It turns out that the probability density |Ψ(x, t)|² shows quite surprising behaviour: at definite times, plateaux appear on which |Ψ(x, t)|² is constant over finite intervals of x. Elements of theoretical explanation are given by analysing the singular component of the second derivative ∂²Ψ(x, t)/∂x². Analytical closed expressions are obtained for some specific times, which easily allow us to show that, at these times, the density organizes itself into regular patterns provided the size of the box is large enough; moreover, above some critical size depending on the specific time, the density patterns are independent of the expansion parameter. It is seen how the density at these times simply results from a construction game with definite rules acting on the pieces of the initial density.
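The setup can be reproduced numerically by expanding the initial ground state on the eigenbasis of the widened well and evolving each mode with its phase factor. This sketch assumes units with ħ = 2m = 1 (so Eₙ = (nπ/L)²) and illustrative parameters (initial width a = 1 expanded to L = 2); it is not the author's calculation:

```python
import numpy as np

a, L, n_max = 1.0, 2.0, 200
x = np.linspace(0.0, L, 2001)

def c_n(n):
    """Overlap of the initial ground state with the n-th expanded-well eigenstate."""
    xs = np.linspace(0.0, a, 4001)          # initial state vanishes beyond x = a
    dx = xs[1] - xs[0]
    phi0 = np.sqrt(2.0 / a) * np.sin(np.pi * xs / a)
    phin = np.sqrt(2.0 / L) * np.sin(n * np.pi * xs / L)
    return float(np.sum(phi0 * phin) * dx)

def psi(t):
    """Wavefunction at time t after the sudden expansion."""
    out = np.zeros_like(x, dtype=complex)
    for n in range(1, n_max + 1):
        E = (n * np.pi / L) ** 2
        out += c_n(n) * np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L) * np.exp(-1j * E * t)
    return out

density = np.abs(psi(0.05)) ** 2
norm = float(np.sum(density) * (x[1] - x[0]))
print(round(norm, 3))  # ~1.0: the sudden expansion conserves probability
```

Evaluating the density at the specific fractional times discussed in the paper would then reveal the plateau structure.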

  11. Dividend announcements reconsidered: Dividend changes versus dividend surprises

    OpenAIRE

    Andres, Christian; Betzer, André; van den Bongard, Inga; Haesner, Christian; Theissen, Erik

    2012-01-01

    This paper reconsiders the issue of share price reactions to dividend announcements. Previous papers rely almost exclusively on a naive dividend model in which the dividend change is used as a proxy for the dividend surprise. We use the difference between the actual dividend and the analyst consensus forecast as obtained from I/B/E/S as a proxy for the dividend surprise. Using data from Germany, we find significant share price reactions after dividend announcements. Once we control for analys...

  12. Charming surprise

    CERN Multimedia

    Antonella Del Rosso

    2011-01-01

    The CP violation in charm quarks has always been thought to be extremely small. So, looking at particle decays involving matter and antimatter, the LHCb experiment has recently been surprised to observe that things might be different. Theorists are on the case.   The study of the physics of the charm quark was not in the initial plans of the LHCb experiment, whose letter “b” stands for “beauty quark”. However, already one year ago, the Collaboration decided to look into a wider spectrum of processes that involve charm quarks among other things. The LHCb trigger allows a lot of these processes to be selected, and, among them, one has recently shown interesting features. Other experiments at b-factories have already performed the same measurement but this is the first time that it has been possible to achieve such high precision, thanks to the huge amount of data provided by the very high luminosity of the LHC. “We have observed the decay modes of t...

  13. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  14. Salience and attention in surprisal-based accounts of language processing

    Directory of Open Access Journals (Sweden)

    Alessandra eZarcone

    2016-06-01

The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g. visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g. prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension, as well as for certain patterns in language usage, using information-theoretic notions such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal/predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representation in salience-based and surprisal-based accounts. We capitalise upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.
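The information-theoretic surprisal invoked by the review has a compact definition: the surprisal of a word is the negative log probability of that word given its context. A toy sketch with an invented next-word distribution:

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with probability p."""
    return -math.log2(p)

# Invented P(word | context) values for illustration only.
p_next = {"dog": 0.5, "cat": 0.25, "banana": 0.0625}
for word, p in p_next.items():
    print(word, surprisal(p))  # dog 1.0, cat 2.0, banana 4.0
```

Less predictable words carry higher surprisal, which correlates with processing cost in reading-time studies.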

  15. The Ultraviolet Surprise. Efficient Soft X-Ray High Harmonic Generation in Multiply-Ionized Plasmas

    International Nuclear Information System (INIS)

    Popmintchev, Dimitar; Hernandez-Garcia, Carlos; Dollar, Franklin; Mancuso, Christopher; Perez-Hernandez, Jose A.; Chen, Ming-Chang; Hankla, Amelia; Gao, Xiaohui; Shim, Bonggu; Gaeta, Alexander L.; Tarazkar, Maryam; Romanov, Dmitri A.; Levis, Robert J.; Gaffney, Jim A.; Foord, Mark; Libby, Stephen B.; Jaron-Becker, Agnieskzka; Becker, Andreas; Plaja, Luis; Muranane, Margaret M.; Kapteyn, Henry C.; Popmintchev, Tenio

    2015-01-01

High-harmonic generation is a universal response of matter to strong femtosecond laser fields, coherently upconverting light to much shorter wavelengths. Optimizing the conversion of laser light into soft x-rays typically demands a trade-off between two competing factors. Because of reduced quantum diffusion of the radiating electron wave function, the emission from each species is highest when a short-wavelength ultraviolet driving laser is used. But phase matching - the constructive addition of x-ray waves from a large number of atoms - favors longer-wavelength mid-infrared lasers. We identified a regime of high-harmonic generation driven by 40-cycle ultraviolet lasers in waveguides that can generate bright beams in the soft x-ray region of the spectrum, up to photon energies of 280 electron volts. Surprisingly, the high ultraviolet refractive indices of both neutral atoms and ions enabled effective phase matching, even in a multiply ionized plasma. We observed harmonics with very narrow linewidths, while calculations show that the x-rays emerge as nearly time-bandwidth-limited pulse trains of ~100 attoseconds.

  16. Viral marketing: the use of surprise

    NARCIS (Netherlands)

    Lindgreen, A.; Vanhamme, J.; Clarke, I.; Flaherty, T.B.

    2005-01-01

    Viral marketing involves consumers passing along a company's marketing message to their friends, family, and colleagues. This chapter reviews viral marketing campaigns and argues that the emotion of surprise often is at work and that this mechanism resembles that of word-of-mouth marketing.

  17. Distinct medial temporal networks encode surprise during motivation by reward versus punishment

    Science.gov (United States)

    Murty, Vishnu P.; LaBar, Kevin S.; Adcock, R. Alison

    2016-01-01

    Adaptive motivated behavior requires predictive internal representations of the environment, and surprising events are indications for encoding new representations of the environment. The medial temporal lobe memory system, including the hippocampus and surrounding cortex, encodes surprising events and is influenced by motivational state. Because behavior reflects the goals of an individual, we investigated whether motivational valence (i.e., pursuing rewards versus avoiding punishments) also impacts neural and mnemonic encoding of surprising events. During functional magnetic resonance imaging (fMRI), participants encountered perceptually unexpected events either during the pursuit of rewards or avoidance of punishments. Despite similar levels of motivation across groups, reward and punishment facilitated the processing of surprising events in different medial temporal lobe regions. Whereas during reward motivation, perceptual surprises enhanced activation in the hippocampus, during punishment motivation surprises instead enhanced activation in parahippocampal cortex. Further, we found that reward motivation facilitated hippocampal coupling with ventromedial PFC, whereas punishment motivation facilitated parahippocampal cortical coupling with orbitofrontal cortex. Behaviorally, post-scan testing revealed that reward, but not punishment, motivation resulted in greater memory selectivity for surprising events encountered during goal pursuit. Together these findings demonstrate that neuromodulatory systems engaged by anticipation of reward and punishment target separate components of the medial temporal lobe, modulating medial temporal lobe sensitivity and connectivity. Thus, reward and punishment motivation yield distinct neural contexts for learning, with distinct consequences for how surprises are incorporated into predictive mnemonic models of the environment. PMID:26854903

  18. Distinct medial temporal networks encode surprise during motivation by reward versus punishment.

    Science.gov (United States)

    Murty, Vishnu P; LaBar, Kevin S; Adcock, R Alison

    2016-10-01

    Adaptive motivated behavior requires predictive internal representations of the environment, and surprising events are indications for encoding new representations of the environment. The medial temporal lobe memory system, including the hippocampus and surrounding cortex, encodes surprising events and is influenced by motivational state. Because behavior reflects the goals of an individual, we investigated whether motivational valence (i.e., pursuing rewards versus avoiding punishments) also impacts neural and mnemonic encoding of surprising events. During functional magnetic resonance imaging (fMRI), participants encountered perceptually unexpected events either during the pursuit of rewards or avoidance of punishments. Despite similar levels of motivation across groups, reward and punishment facilitated the processing of surprising events in different medial temporal lobe regions. Whereas during reward motivation, perceptual surprises enhanced activation in the hippocampus, during punishment motivation surprises instead enhanced activation in parahippocampal cortex. Further, we found that reward motivation facilitated hippocampal coupling with ventromedial PFC, whereas punishment motivation facilitated parahippocampal cortical coupling with orbitofrontal cortex. Behaviorally, post-scan testing revealed that reward, but not punishment, motivation resulted in greater memory selectivity for surprising events encountered during goal pursuit. Together these findings demonstrate that neuromodulatory systems engaged by anticipation of reward and punishment target separate components of the medial temporal lobe, modulating medial temporal lobe sensitivity and connectivity. Thus, reward and punishment motivation yield distinct neural contexts for learning, with distinct consequences for how surprises are incorporated into predictive mnemonic models of the environment. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships between Fubini independence, Exponential independence, Maccheroni and Marinacci's independence, and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  20. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative, high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure the trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual, and then propose a variance-minimization active contour method to obtain optimal segmentation results. For tracking, the cost matrix for assignment between consecutive frames is learned via a random forest classifier over spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
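The data-association step described in this abstract reduces to a linear assignment problem: match each track in frame t to a detection in frame t+1 so that the total cost is minimal. A minimal sketch, with an invented cost matrix (the paper learns its costs with a random forest over spatial, texture and shape features, and real trackers use the O(n³) Hungarian algorithm rather than brute force):

```python
# Brute-force linear assignment: fine for the tiny matrices of a sketch.
from itertools import permutations

def best_assignment(cost):
    """Return the permutation matching rows (tracks) to columns
    (detections) that minimises the total cost, and that cost."""
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Hypothetical cost matrix: cost[i][j] = distance of track i to detection j.
cost = [[1.0, 4.0, 5.0],
        [6.0, 2.0, 9.0],
        [7.0, 8.0, 3.0]]
perm, total = best_assignment(cost)
print(perm, total)  # -> (0, 1, 2) 6.0: each track keeps its nearest detection
```

Solving one such problem per consecutive frame pair, as the abstract describes, links detections into whole-sequence trajectories.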

  1. Colour by Numbers

    Science.gov (United States)

    Wetherell, Chris

    2017-01-01

    This is an edited extract from the keynote address given by Dr. Chris Wetherell at the 26th Biennial Conference of the Australian Association of Mathematics Teachers Inc. The author investigates the surprisingly rich structure that exists within a simple arrangement of numbers: the times tables.

  2. Salience and Attention in Surprisal-Based Accounts of Language Processing.

    Science.gov (United States)

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.

  3. Salience and Attention in Surprisal-Based Accounts of Language Processing

    Science.gov (United States)

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus. PMID:27375525

  4. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

A large lepton number asymmetry of O(0.1-1) in the present Universe might not only be allowed but even be necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing an O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^-2-10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector, such as the mass and the vacuum expectation value of the saxion field, to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  5. Numbers their history and meaning

    CERN Document Server

    Flegg, Graham

    2003-01-01

    Readable, jargon-free book examines the earliest endeavors to count and record numbers, initial attempts to solve problems by using equations, and origins of infinite cardinal arithmetic. "Surprisingly exciting." - Choice.

  6. Charming surprise

    CERN Multimedia

    Antonella Del Rosso

    2011-01-01

    The CP violation in charm quarks has always been thought to be extremely small. So, looking at particle decays involving matter and antimatter, the LHCb experiment has recently been surprised to observe that things might be different. Theorists are on the case. The study of the physics of the charm quark was not in the initial plans of the LHCb experiment, whose letter “b” stands for “beauty quark”. However, already one year ago, the Collaboration decided to look into a wider spectrum of processes that involve charm quarks among other things. The LHCb trigger allows a lot of these processes to be selected, and, among them, one has recently shown interesting features. Other experiments at b-factories have already performed the same measurement but this is the first time that it has been possible to achieve such high precision, thanks to the huge amount of data provided by the very high luminosity of the LHC. “We have observed the decay modes of the D0, a pa...

  7. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  8. The Surprise Examination Paradox and the Second Incompleteness Theorem

    OpenAIRE

    Kritchman, Shira; Raz, Ran

    2010-01-01

    We give a new proof for Godel's second incompleteness theorem, based on Kolmogorov complexity, Chaitin's incompleteness theorem, and an argument that resembles the surprise examination paradox. We then go the other way around and suggest that the second incompleteness theorem gives a possible resolution of the surprise examination paradox. Roughly speaking, we argue that the flaw in the derivation of the paradox is that it contains a hidden assumption that one can prove the consistency of the...

  9. A Numeric Scorecard Assessing the Mental Health Preparedness for Large-Scale Crises at College and University Campuses: A Delphi Study

    Science.gov (United States)

    Burgin, Rick A.

    2012-01-01

    Large-scale crises continue to surprise, overwhelm, and shatter college and university campuses. While the devastation to physical plants and persons is often evident and is addressed with crisis management plans, the number of emotional casualties left in the wake of these large-scale crises may not be apparent and are often not addressed with…

  10. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

For Rayleigh-Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities, for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  11. Self-organizing weights for Internet AS-graphs and surprisingly simple routing metrics

    DEFF Research Database (Denmark)

    Scholz, Jan Carsten; Greiner, Martin

    2011-01-01

The transport capacity of Internet-like communication networks and hence their efficiency may be improved by a factor of 5-10 through the use of highly optimized routing metrics, as demonstrated previously. The numerical determination of such routing metrics can be computationally demanding to an extent that prohibits both investigation of and application to very large networks. In an attempt to find a numerically less expensive way of constructing a metric with a comparable performance increase, we propose a local, self-organizing iteration scheme and find two surprisingly simple and efficient metrics. The new metrics have negligible computational cost and result in an approximately 5-fold performance increase, providing distinguished competitiveness with the computationally costly counterparts. They are applicable to very large networks and easy to implement in today's Internet routing ...

  12. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of ...

  13. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

A general theory for constructing linear secret sharing schemes over a finite field $\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method to certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds ...

  14. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on the magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. Calculations of local-mode stability show a tendency toward an increasing beta limit with increasing number of periods. Consideration of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches its straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi systems considered here, with zero net toroidal current, do not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect, nor what the best possible parameters for it are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Properties such as fast-particle confinement, effective ripple, the structural factor of the bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is larger than in configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of the aspect ratio do not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)

  15. Pupil size tracks perceptual content and surprise.

    Science.gov (United States)

    Kloosterman, Niels A; Meindertsma, Thomas; van Loon, Anouk M; Lamme, Victor A F; Bonneh, Yoram S; Donner, Tobias H

    2015-04-01

    Changes in pupil size at constant light levels reflect the activity of neuromodulatory brainstem centers that control global brain state. These endogenously driven pupil dynamics can be synchronized with cognitive acts. For example, the pupil dilates during the spontaneous switches of perception of a constant sensory input in bistable perceptual illusions. It is unknown whether this pupil dilation only indicates the occurrence of perceptual switches, or also their content. Here, we measured pupil diameter in human subjects reporting the subjective disappearance and re-appearance of a physically constant visual target surrounded by a moving pattern ('motion-induced blindness' illusion). We show that the pupil dilates during the perceptual switches in the illusion and a stimulus-evoked 'replay' of that illusion. Critically, the switch-related pupil dilation encodes perceptual content, with larger amplitude for disappearance than re-appearance. This difference in pupil response amplitude enables prediction of the type of report (disappearance vs. re-appearance) on individual switches (receiver-operating characteristic: 61%). The amplitude difference is independent of the relative durations of target-visible and target-invisible intervals and subjects' overt behavioral report of the perceptual switches. Further, we show that pupil dilation during the replay also scales with the level of surprise about the timing of switches, but there is no evidence for an interaction between the effects of surprise and perceptual content on the pupil response. Taken together, our results suggest that pupil-linked brain systems track both the content of, and surprise about, perceptual events. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. The conceptualization model problem—surprise

    Science.gov (United States)

    Bredehoeft, John

    2005-03-01

The foundation of model analysis is the conceptual model. Surprise is defined as new data that renders the prevailing conceptual model invalid; as defined here it represents a paradigm shift. Limited empirical data indicate that surprises occur in 20-30% of model analyses. These data suggest that groundwater analysts have difficulty selecting the appropriate conceptual model. There is no ready remedy to the conceptual model problem other than (1) to collect as much data as is feasible, using all applicable methods - a complementary data collection methodology can lead to new information that changes the prevailing conceptual model, and (2) for the analyst to remain open to the fact that the conceptual model can change dramatically as more information is collected. In the final analysis, the hydrogeologist makes a subjective decision on the appropriate conceptual model. The conceptualization problem does not render models unusable. The problem introduces an uncertainty that often is not widely recognized. Conceptual model uncertainty is exacerbated in making long-term predictions of system performance.

  17. Glial heterotopia of maxilla: A clinical surprise

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Mahalik

    2011-01-01

Glial heterotopia is a rare congenital mass lesion which often presents as a clinical surprise. We report a case of extranasal glial heterotopia in a neonate with unusual features. The presentation, management strategy, etiopathogenesis and histopathology of the mass lesion have been reviewed.

  18. Beyond surprise : A longitudinal study on the experience of visual-tactual incongruities in products

    NARCIS (Netherlands)

    Ludden, G.D.S.; Schifferstein, H.N.J.; Hekkert, P.

    2012-01-01

    When people encounter products with visual-tactual incongruities, they are likely to be surprised because the product feels different than expected. In this paper, we investigate (1) the relationship between surprise and the overall liking of the products, (2) the emotions associated with surprise,

  19. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  20. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
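The setting in this abstract can be illustrated numerically: the empirical AUC of a marker equals the fraction of case/control pairs it ranks correctly (the Mann-Whitney U statistic), and a linear combination of two weak markers can rank better than either alone. A minimal sketch on invented data (the paper's methods optimise the combination weights; the equal-weight sum below is only for illustration):

```python
# Empirical AUC via the Mann-Whitney U statistic: the fraction of
# (case, control) pairs in which the case scores higher (ties count 1/2).
def auc(cases, controls):
    pairs = [(x, y) for x in cases for y in controls]
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x, y in pairs)
    return wins / len(pairs)

# Two weak, hypothetical markers measured on 4 cases and 4 controls.
cases    = [(1.0, 0.2), (0.4, 1.1), (0.9, 0.8), (0.3, 0.9)]
controls = [(0.5, 0.4), (0.2, 0.6), (0.8, 0.1), (0.1, 0.3)]

auc1 = auc([c[0] for c in cases], [c[0] for c in controls])   # marker 1 alone
auc2 = auc([c[1] for c in cases], [c[1] for c in controls])   # marker 2 alone
# Equal-weight linear combination of the two markers.
auc_comb = auc([c[0] + c[1] for c in cases],
               [c[0] + c[1] for c in controls])
print(auc1, auc2, auc_comb)  # -> 0.75 0.8125 1.0
```

Here the combination separates cases from controls perfectly even though each marker alone is weak, which is the effect the combination methods in the paper aim to maximise at scale.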

  1. X rays and radioactivity: a complete surprise

    International Nuclear Information System (INIS)

    Radvanyi, P.; Bordry, M.

    1995-01-01

The discoveries of X rays and of radioactivity came as complete experimental surprises; the physicists, at that time, had no previous hint of a possible structure of atoms. It is difficult now, knowing what we know, to place ourselves in the spirit, astonishment and questioning of these years, between 1895 and 1903. The nature of X rays was soon hypothesized, but the nature of the rays emitted by uranium, polonium and radium was much more difficult to disentangle, as they were a mixture of different types of radiations. The origin of the energy continuously released in radioactivity remained a complete mystery for a few years. The multiplicity of the radioactive substances soon became a difficult matter: what was real and what was induced? Isotopy was still far ahead. It appeared that some radioactive substances had "half-lives": were they genuine radioactive elements, or was it just a transitory phenomenon? Henri Becquerel (in 1900) and Pierre and Marie Curie (in 1902) hesitated over the correct answer. Only after Ernest Rutherford and Frederick Soddy established that radioactivity was the transmutation of one element into another could one understand that a solid element transformed into a gaseous element, which in turn transformed itself into a succession of solid radioactive elements. It was only in 1913 - after the discovery of the atomic nucleus - that, through precise measurements of X-ray spectra, Henry Moseley showed that the number of electrons of a given atom - and the charge of its nucleus - was equal to its atomic number in the periodic table. (authors)

  2. X rays and radioactivity: a complete surprise

    Energy Technology Data Exchange (ETDEWEB)

Radvanyi, P. [Laboratoire National Saturne, Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France); Bordry, M. [Institut du Radium, 75 - Paris (France)

    1995-12-31

The discoveries of X rays and of radioactivity came as complete experimental surprises; the physicists, at that time, had no previous hint of a possible structure of atoms. It is difficult now, knowing what we know, to place ourselves in the spirit, astonishment and questioning of these years, between 1895 and 1903. The nature of X rays was soon hypothesized, but the nature of the rays emitted by uranium, polonium and radium was much more difficult to disentangle, as they were a mixture of different types of radiations. The origin of the energy continuously released in radioactivity remained a complete mystery for a few years. The multiplicity of the radioactive substances soon became a difficult matter: what was real and what was induced? Isotopy was still far ahead. It appeared that some radioactive substances had "half-lives": were they genuine radioactive elements, or was it just a transitory phenomenon? Henri Becquerel (in 1900) and Pierre and Marie Curie (in 1902) hesitated over the correct answer. Only after Ernest Rutherford and Frederick Soddy established that radioactivity was the transmutation of one element into another could one understand that a solid element transformed into a gaseous element, which in turn transformed itself into a succession of solid radioactive elements. It was only in 1913 - after the discovery of the atomic nucleus - that, through precise measurements of X-ray spectra, Henry Moseley showed that the number of electrons of a given atom - and the charge of its nucleus - was equal to its atomic number in the periodic table. (authors)

  3. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to make predictions; the larger the population considered, the more accurate the predictions. In the field of insurance, the law of large numbers is used to predict the risk of loss or the claims of participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of those 100 participants should be able to provide the sum assured for at least one accident claim. The larger the number of insurance participants considered, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; here applies what is called the law of large numbers. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss will be closer to the actual loss. The use of the law of large numbers thus allows losses to be predicted more accurately.
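The convergence this abstract describes is easy to demonstrate numerically: with a claim probability of 1 in 100, the observed claim frequency in a simulated pool of insured lives approaches 0.01 as the pool grows. A minimal sketch with illustrative numbers:

```python
# Simulate claim frequencies in insurance pools of increasing size.
import random

random.seed(42)
CLAIM_PROB = 0.01  # one accident claim per 100 participants on average

def observed_claim_rate(n_participants):
    """Fraction of a simulated pool that files a claim."""
    claims = sum(random.random() < CLAIM_PROB for _ in range(n_participants))
    return claims / n_participants

for n in (100, 10_000, 1_000_000):
    rate = observed_claim_rate(n)
    print(f"pool of {n:>9}: observed claim rate = {rate:.4f}")
```

The deviation of the observed rate from 0.01 shrinks roughly like 1/sqrt(n), which is why a larger pool lets the insurer set the premium with more confidence that predicted losses match actual losses.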

  4. A Statistical Analysis of the Relationship between Harmonic Surprise and Preference in Popular Music.

    Science.gov (United States)

    Miles, Scott A; Rosen, David S; Grzywacz, Norberto M

    2017-01-01

    Studies have shown that some musical pieces may preferentially activate reward centers in the brain. Less is known, however, about the structural aspects of music that are associated with this activation. Based on the music cognition literature, we propose two hypotheses for why some musical pieces are preferred over others. The first, the Absolute-Surprise Hypothesis, states that unexpected events in music directly lead to pleasure. The second, the Contrastive-Surprise Hypothesis, proposes that the juxtaposition of unexpected events and subsequent expected events leads to an overall rewarding response. We tested these hypotheses within the framework of information theory, using the measure of "surprise." This information-theoretic variable mathematically describes how improbable an event is given a known distribution. We performed a statistical investigation of surprise in the harmonic structure of songs within a representative corpus of Western popular music, namely, the McGill Billboard Project corpus. We found that chords of songs in the top quartile of the Billboard chart showed greater average surprise than those in the bottom quartile. We also found that the different sections within top-quartile songs varied more in their average surprise than the sections within bottom-quartile songs. The results of this study are consistent with both the Absolute- and Contrastive-Surprise Hypotheses. Although these hypotheses seem contradictory to one another, we cannot yet discard the possibility that both absolute and contrastive types of surprise play roles in the enjoyment of popular music. We call this possibility the Hybrid-Surprise Hypothesis. The results of this statistical investigation have implications for both music cognition and the human neural mechanisms of esthetic judgments.
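The information-theoretic "surprise" used in this study is the self-information of an event under a known distribution, surprise(x) = -log2 p(x): improbable chords carry high surprise. A minimal sketch with an invented chord distribution (the paper estimates probabilities from the McGill Billboard corpus):

```python
# Self-information ("surprise") of a chord under a known distribution.
import math

def surprise(event, distribution):
    """Information-theoretic surprise in bits: -log2 p(event)."""
    return -math.log2(distribution[event])

# Hypothetical chord probabilities in a pop-music corpus.
chord_probs = {"I": 0.40, "IV": 0.25, "V": 0.25, "bVI": 0.10}

print(surprise("I", chord_probs))    # common chord -> low surprise (~1.32 bits)
print(surprise("bVI", chord_probs))  # rare chord  -> high surprise (~3.32 bits)
```

Averaging such per-chord surprises over a song, and comparing the averages across song sections, is the kind of statistic the study computes when contrasting top-quartile and bottom-quartile Billboard songs.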

  5. Ignorance, Vulnerability and the Occurrence of "Radical Surprises": Theoretical Reflections and Empirical Findings

    Science.gov (United States)

    Kuhlicke, C.

    2009-04-01

By definition natural disasters always contain a moment of surprise. Their occurrence is mostly unforeseen and unexpected. They hit people unprepared, overwhelm them and expose their helplessness. Yet surprisingly little is known about the reasons for this being surprised. Aren't natural disasters expectable and foreseeable after all? Aren't the return rates of most hazards well known, and shouldn't people be better prepared? The central question of this presentation is hence: why do natural disasters so often radically surprise people at all (and how can we explain this being surprised)? In the first part of the presentation, it is argued that most approaches to vulnerability are not able to grasp this moment of surprise. On the contrary, they have their strength in unravelling the expectable: a person who is marginalized or even oppressed in everyday life is also vulnerable during times of crisis and stress; at least this is the central assumption of most vulnerability studies. In the second part, an understanding of vulnerability is developed which allows taking such radical surprises into account. First, two forms of the unknown are differentiated: an area of the unknown an actor is more or less aware of (ignorance), and an area which is not even known to be not known (nescience). The discovery of the latter is mostly associated with a "radical surprise", since it is by definition impossible to prepare for it. Second, a definition of vulnerability is proposed which allows capturing the dynamics of surprises: people are vulnerable when they discover their nescience exceeding by definition previously established routines, stocks of knowledge and resources - in a general sense their capacities - to deal with their physical and/or social environment. This definition explicitly takes the views of different actors seriously and departs from their being surprised. In the third part, findings of a case study are presented: the 2002 flood in Germany. It is shown

  6. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  7. Decision-making under surprise and uncertainty: Arsenic contamination of water supplies

    Science.gov (United States)

    Randhir, Timothy O.; Mozumder, Pallab; Halim, Nafisa

    2018-05-01

    With ignorance and potential surprise dominating decision making in water resources, a framework for dealing with such uncertainty is a critical need in hydrology. We operationalize the 'potential surprise' criterion proposed by Shackle, Vickers, and Katzner (SVK) to derive decision rules for managing water resources under uncertainty and ignorance. We apply this framework to managing water supply systems in Bangladesh that face severe, naturally occurring arsenic contamination. The uncertainty involved with arsenic in water supplies makes conventional decision-making analysis ineffective. Given the uncertainty and surprise involved in such cases, we find that optimal decisions tend to favor actions that avoid irreversible outcomes over conventionally cost-effective actions. We observe that diversification of the water supply system also emerges as a robust strategy to avert unintended outcomes of water contamination. Shallow wells had a slightly higher optimal allocation (36%) compared to deep wells and surface treatment, each of which had allocation levels of roughly 32%. The approach can be applied in a variety of other cases that involve decision making under uncertainty and surprise, a frequent situation in natural resources management.

  8. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.
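The zeroth-order rule validated in the abstract above (one CG site per fixed number of atoms or residues) can be sketched in a few lines. The ratio of 10 residues per site and the example protein sizes below are illustrative assumptions, not values taken from the paper:

```python
# Minimal sketch of the zeroth-order coarse-graining rule: one CG site per
# fixed number of residues. The ratio of 10 residues per site and the
# example sizes are illustrative assumptions only.

def num_cg_sites(n_residues, residues_per_site=10):
    """Zeroth-order estimate of the number of CG sites for one biomolecule."""
    return max(1, round(n_residues / residues_per_site))

# Hypothetical biomolecule sizes (not from the ribosome/Arp2/3 case studies).
for name, n in [("small protein", 120), ("medium protein", 450), ("large subunit", 3000)]:
    print(f"{name}: {num_cg_sites(n)} CG sites")
```

The first-order corrections derived in the paper would then adjust `residues_per_site` per component, by up to the factor of 3.5 quoted for ribosomal proteins.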

  9. The Value of Change: Surprises and Insights in Stellar Evolution

    Science.gov (United States)

    Bildsten, Lars

    2018-01-01

    Astronomers with large-format cameras regularly scan the sky many times per night to detect what's changing, and telescopes in space such as Kepler and, soon, TESS obtain very accurate brightness measurements of nearly a million stars over time periods of years. These capabilities, in conjunction with theoretical and computational efforts, have yielded surprises and remarkable new insights into the internal properties of stars and how they end their lives. I will show how asteroseismology reveals the properties of the deep interiors of red giants, and highlight how astrophysical transients may be revealing unusual thermonuclear outcomes from exploding white dwarfs and the births of highly magnetic neutron stars. All the while, stellar science has been accelerated by the availability of open source tools, such as Modules for Experiments in Stellar Astrophysics (MESA), and the nearly immediate availability of observational results.

  10. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of orbital-related data. Besides that, for systems with a large number of electrons it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data implemented with MPI or Unix inter-process communication tools, (2) second-level parallelism for configuration computation

  11. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

    .... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  12. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

    The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic

  13. Sleeping beauties in theoretical physics 26 surprising insights

    CERN Document Server

    Padmanabhan, Thanu

    2015-01-01

    This book addresses a fascinating set of questions in theoretical physics which will both entertain and enlighten all students, teachers and researchers and other physics aficionados. These range from Newtonian mechanics to quantum field theory and cover several puzzling issues that do not appear in standard textbooks. Some topics cover conceptual conundrums, the solutions to which lead to surprising insights; some correct popular misconceptions in the textbook discussion of certain topics; others illustrate deep connections between apparently unconnected domains of theoretical physics; and a few provide remarkably simple derivations of results which are not often appreciated. The connoisseur of theoretical physics will enjoy a feast of pleasant surprises skilfully prepared by an internationally acclaimed theoretical physicist. Each topic is introduced with proper background discussion and special effort is taken to make the discussion self-contained, clear and comprehensible to anyone with an undergraduate e...

  14. The June surprises: balls, strikes, and the fog of war.

    Science.gov (United States)

    Fried, Charles

    2013-04-01

    At first, few constitutional experts took seriously the argument that the Patient Protection and Affordable Care Act exceeded Congress's power under the commerce clause. The highly political opinions of two federal district judges of no particular distinction - carefully chosen by the challenging plaintiffs - did not shake the confidence that the act was constitutional. This disdain for the challengers' arguments was only confirmed when the act was upheld by two highly respected conservative court of appeals judges in two separate circuits. But after the hostile, even mocking questioning of the government's advocate in the Supreme Court by the five Republican-appointed justices, the expectation was that the act would indeed be struck down on that ground. So it came as no surprise when the five opined that the act did indeed exceed Congress's commerce clause power. But it came as a great surprise when Chief Justice John Roberts, joined by the four Democrat-appointed justices, ruled that the act could be sustained as an exercise of Congress's taxing power - a ground urged by the government almost as an afterthought. It was further surprising, even shocking, that Justices Antonin Scalia, Anthony Kennedy, Clarence Thomas, and Samuel Alito not only wrote a joint opinion on the commerce clause virtually identical to that of their chief, but that in writing it they did not refer to or even acknowledge his opinion. Finally surprising was the fact that Justices Ruth Bader Ginsburg and Stephen Breyer joined the chief in holding that aspects of the act's Medicaid expansion were unconstitutional. This essay ponders and tries to unravel some of these puzzles.

  15. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs
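The violation referred to above can be illustrated directly with globally coupled logistic maps: in the turbulent phase, the variance of the mean field fails to decay like 1/N as the law of large numbers would suggest. The following sketch is ours, with parameter values chosen as a plausible turbulent regime rather than taken from the paper:

```python
import numpy as np

def mean_field_variance(N, a=1.99, eps=0.1, steps=3000, transient=1000, seed=0):
    """Variance over time of the mean field h_t of N globally coupled logistic maps."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, N)
    h_series = []
    for t in range(steps):
        fx = 1.0 - a * x**2          # local logistic dynamics f(x) = 1 - a x^2
        h = fx.mean()                # global mean field
        x = (1.0 - eps) * fx + eps * h
        if t >= transient:
            h_series.append(h)
    return np.var(h_series)

# If the law of large numbers held, these variances would shrink roughly as 1/N;
# in the turbulent phase of globally coupled maps they are known to saturate.
for N in (100, 1000, 10000):
    print(N, mean_field_variance(N))
```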

  16. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

    Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
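The definitions above can be checked concretely: on a small hypergraph, the domination number is computable by brute force and can be compared with the cited bound γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. The toy hypergraph below is our own example, not one from the paper:

```python
from itertools import combinations

# Toy hypergraph: 6 vertices, two edges of size k = 5, no isolated vertex.
V = list(range(6))
E = [{0, 1, 2, 3, 4}, {1, 2, 3, 4, 5}]

def is_dominating(D):
    """Every vertex is in D or shares an edge with a vertex of D."""
    Dset = set(D)
    return all(v in Dset or any(v in e and e & Dset for e in E) for v in V)

def domination_number():
    for r in range(len(V) + 1):
        if any(is_dominating(D) for D in combinations(V, r)):
            return r

gamma = domination_number()
n, m, k = len(V), len(E), 5
bound = (n + ((k - 3) // 2) * m) / (3 * (k - 1) // 2)  # (n + floor((k-3)/2)m)/floor(3(k-1)/2)
print(gamma, bound)  # gamma must not exceed the bound
```

Here {1} already dominates every vertex, so γ(H) = 1 ≤ 8/6, consistent with the theorem.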

  17. Triangular Numbers, Gaussian Integers, and KenKen

    Science.gov (United States)

    Watkins, John J.

    2012-01-01

    Latin squares form the basis for the recreational puzzles sudoku and KenKen. In this article we show how useful several ideas from number theory are in solving a KenKen puzzle. For example, the simple notion of triangular number is surprisingly effective. We also introduce a variation of KenKen that uses the Gaussian integers in order to…
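The effectiveness of triangular numbers in such puzzles rests on a simple fact: each row of an n×n Latin square contains 1 through n exactly once, so it sums to T_n = n(n+1)/2. A small illustration (the cage total below is a made-up example, not a puzzle from the article):

```python
def triangular(n):
    """T_n = n(n+1)/2, the sum 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# In a 6x6 KenKen, every row contains 1..6 exactly once and so sums to T_6 = 21.
# If cages lying entirely within a row account for 17, the single remaining
# cell in that row must hold 21 - 17 = 4.
row_total = triangular(6)
known_cage_sum = 17                 # hypothetical total of the row's other cages
print(row_total - known_cage_sum)   # -> 4
```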

  18. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ~ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  19. Distinct medial temporal networks encode surprise during motivation by reward versus punishment

    OpenAIRE

    Murty, Vishnu P.; LaBar, Kevin S.; Adcock, R. Alison

    2016-01-01

    Adaptive motivated behavior requires predictive internal representations of the environment, and surprising events are indications for encoding new representations of the environment. The medial temporal lobe memory system, including the hippocampus and surrounding cortex, encodes surprising events and is influenced by motivational state. Because behavior reflects the goals of an individual, we investigated whether motivational valence (i.e., pursuing rewards versus avoiding punishments) also...

  20. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

    The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated by both direct summation and by the ''cloud in cell'' technique. The latter method is found to produce comparable error and to be much faster
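The scheme described above — point vortices advected by their mutually induced (Biot-Savart) velocities plus a Gaussian random walk of variance 2νΔt per component to mimic viscosity — can be sketched as follows. This is our minimal illustration of the direct-summation variant, not the authors' code, and it omits the "cloud in cell" acceleration:

```python
import numpy as np

def step_vortices(z, gamma, nu, dt, rng):
    """Advance point vortices (complex positions z, circulations gamma) one step:
    mutually induced Biot-Savart velocities plus a random walk for viscosity."""
    dz = z[:, None] - z[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        contrib = gamma[None, :] / dz                  # Gamma_j / (z_i - z_j)
    np.fill_diagonal(contrib, 0.0)                     # exclude self-interaction
    vel = np.conj(contrib.sum(axis=1) / (2j * np.pi))  # u + iv from u - iv
    # Random walk: each coordinate gets variance 2*nu*dt, simulating diffusion.
    noise = rng.normal(0.0, np.sqrt(2.0 * nu * dt), (len(z), 2))
    return z + vel * dt + noise[:, 0] + 1j * noise[:, 1]

# Decay of a circular vortex: N discrete vortices sampling a Gaussian blob.
rng = np.random.default_rng(1)
N = 200
z = rng.normal(0.0, 0.5, N) + 1j * rng.normal(0.0, 0.5, N)
gamma = np.full(N, 1.0 / N)        # total circulation 1
for _ in range(50):
    z = step_vortices(z, gamma, nu=1e-3, dt=0.01, rng=rng)
print(np.mean(np.abs(z) ** 2))     # pure diffusion alone would grow this by ~4*nu*t
```

The paper's accuracy finding can be probed with this sketch by varying N against the effective Reynolds number set by nu.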

  1. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automatic line the equipment is complex and the control modes are flexible, so realizing orderly control and information interaction among a large number of stepping and servo motors becomes difficult. Based on an existing flexible production line, this paper makes a comparative study of its network strategies. From this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the efficiency of data interaction between devices and stabilize the exchanged information.

  2. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  3. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  4. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous, precision-critical working steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de

  5. Managing Uncertainty: Soviet Views on Deception, Surprise, and Control

    National Research Council Canada - National Science Library

    Hull, Andrew

    1989-01-01

    .... In the first two cases (deception and surprise), the emphasis is on how the Soviets seek to sow uncertainty in the minds of the enemy and how the Soviets then plan to use that uncertainty to gain military advantage...

  6. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics - for example, those which realize large-field inflation or have disparate hierarchies of scales - can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that, without major modifications, it cannot be naturally embedded within string theory. In particular, we find corrections to the relaxion potential, due to the ten-dimensional backreaction of monodromy charge, that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur.
We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  7. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  8. Effects of surprisal and locality on Danish sentence processing

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Kizach, Johannes

    2017-01-01

    An eye-tracking experiment in Danish investigates two dominant accounts of sentence processing: locality-based theories that predict a processing advantage for sentences where the distance between the major syntactic heads is minimized, and the surprisal theory which predicts that processing time...

  9. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
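The idea behind the applet and activity can be reproduced in a few lines: the running proportion of heads converges to the true probability, even though the absolute excess of heads over tails typically does not shrink. This is a plain-Python analogue of the SOCR coin-toss experiment, not the applet's code:

```python
import random

random.seed(7)

def coin_summary(n_flips, p=0.5):
    """Return (proportion of heads, absolute excess of heads over tails)."""
    heads = sum(1 for _ in range(n_flips) if random.random() < p)
    return heads / n_flips, abs(2 * heads - n_flips)

for n in (100, 10_000, 1_000_000):
    prop, excess = coin_summary(n)
    print(f"n={n:>9}: proportion={prop:.4f}, |heads - tails|={excess}")
# The proportion approaches 0.5 (the LLN), while |heads - tails| need not shrink;
# conflating the two is one of the common misconceptions the activity targets.
```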

  10. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  11. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)
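The flavor of these "coincidences" is easy to reproduce: the ratio of the electric to the gravitational force between a proton and an electron is a pure number of order 10^39, the magnitude that recurs throughout Dirac's hypothesis. The sketch below uses rounded CODATA-style constants; only the order of magnitude matters:

```python
# Dirac-style large number: electric vs. gravitational attraction between
# a proton and an electron (rounded constants; order of magnitude is the point).
e   = 1.602176634e-19    # elementary charge, C
k_C = 8.9875517923e9     # Coulomb constant, N m^2 C^-2
G   = 6.67430e-11        # gravitational constant, N m^2 kg^-2
m_p = 1.67262192e-27     # proton mass, kg
m_e = 9.1093837e-31      # electron mass, kg

ratio = k_C * e**2 / (G * m_p * m_e)
print(f"F_electric / F_gravity ~ {ratio:.2e}")   # about 2.3e39
```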

  12. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

    We conjecture that the phase transitions in QCD at large number of colors N ≫ 1 are triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement-deconfinement phase transition indeed happens precisely at the temperature T = T_c where the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T < T_c to cos(θ) exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ ≡ N_f/N from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ < κ_c, the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ < κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase

  13. Numbers and other math ideas come alive

    CERN Document Server

    Pappas, Theoni

    2012-01-01

    Most people don't think about numbers, or take them for granted. For the average person numbers are looked upon as cold, clinical, inanimate objects. Math ideas are viewed as something to get a job done or a problem solved. Get ready for a big surprise with Numbers and Other Math Ideas Come Alive. Pappas explores mathematical ideas by looking behind the scenes of what numbers, points, lines, and other concepts are saying and thinking. In each story, properties and characteristics of math ideas are entertainingly uncovered and explained through the dialogues and actions of its math

  14. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  15. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors and disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
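    As a toy illustration of the dimensionality problem discussed above, the sketch below screens a synthetic genotype matrix for marginal SNP-disease association and keeps a subset of top-ranked predictors. It is not any of the named methods (PDM, GPNN, CPM, RPM, MDR); the data, the planted effect, and the cutoff are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_snps = 500, 1000
# Genotypes coded 0/1/2 copies of the minor allele (purely synthetic data)
X = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
# Hypothetical disease status driven by SNP 0 plus Gaussian noise
y = (X[:, 0] + rng.normal(0.0, 1.0, n_subjects) > 1.5).astype(float)

# Marginal screen: correlate each SNP with disease status, keep the top hits
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = (Xc * yc[:, None]).mean(axis=0) / (Xc.std(axis=0) * yc.std())
top10 = np.argsort(np.abs(r))[::-1][:10]
print(top10)  # the causal SNP (index 0) should rank at the top
```

With 1,000 candidate SNPs and only 500 subjects, a joint logistic regression would be ill-determined, which is why reduction steps of this kind precede the multivariate modeling discussed in the commentary.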

  16. Things may not be as expected: Surprising findings when updating ...

    African Journals Online (AJOL)

    2015-05-14

    May 14, 2015 ... Things may not be as expected: Surprising findings when updating .... (done at the end of three months after the first review month) ..... Allen G. Getting beyond form filling: The role of institutional governance in human research ...

  17. Large Data Set Mining

    NARCIS (Netherlands)

    Leemans, I.B.; Broomhall, Susan

    2017-01-01

    Digital emotion research has yet to make history. Until now large data set mining has not been a very active field of research in early modern emotion studies. This is indeed surprising since first, the early modern field has such rich, copyright-free, digitized data sets and second, emotion studies

  18. Estimations of expectedness and potential surprise in possibility theory

    Science.gov (United States)

    Prade, Henri; Yager, Ronald R.

    1992-01-01

    This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. Particularly, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility to take place. Four basic set-functions associated with a possibility distribution, including standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed when only significant parts of the possibility distribution are retained in the evaluation. The case of partially-known possibility distributions is also considered. Some potential applications are outlined.
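    The two standard set functions mentioned above admit a very short computational sketch (the weather alternatives and their possibility degrees below are invented for illustration; `possibility` and `necessity` are our own helper names, not functions from the paper):

```python
def possibility(pi, event):
    """Pi(A): the degree of the most possible alternative inside A."""
    return max(pi[x] for x in event)

def necessity(pi, event):
    """N(A) = 1 - Pi(complement of A): A is certain insofar as every
    alternative outside A has low possibility."""
    complement = set(pi) - set(event)
    return 1.0 - (max(pi[x] for x in complement) if complement else 0.0)

# Rank-ordered, mutually exclusive alternatives (hypothetical degrees)
pi = {"sun": 1.0, "clouds": 0.7, "rain": 0.3}
event = {"sun", "clouds"}
print(possibility(pi, event))  # 1.0: "I would not be surprised by this"
print(necessity(pi, event))    # 0.7: fairly expected, some surprise left
```

High possibility captures the "I would not be surprised that..." attitude before the event, while high necessity is closer to the "I knew it" judgment after its realization.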

  19. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t ≫ p

  20. Automation surprise : results of a field survey of Dutch pilots

    NARCIS (Netherlands)

    de Boer, R.J.; Hurts, Karel

    2017-01-01

    Automation surprise (AS) has often been associated with aviation safety incidents. Although numerous laboratory studies have been conducted, few data are available from routine flight operations. A survey among a representative sample of 200 Dutch airline pilots was used to determine the prevalence

  1. Models of Automation Surprise: Results of a Field Survey in Aviation

    Directory of Open Access Journals (Sweden)

    Robert De Boer

    2017-09-01

    Automation surprises in aviation continue to be a significant safety concern and the community’s search for effective strategies to mitigate them is ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration with automation. In this paper, we report the results of a field study that empirically compared and contrasted two models of automation surprises: a normative individual-cognition model and a sensemaking model based on distributed cognition. Our data prove a good fit for the sensemaking model. This finding is relevant for aviation safety, since our understanding of the cognitive processes that govern human interaction with automation drives what we need to do to reduce the frequency of automation-induced events.

  2. System for high-voltage control of detectors with a large number of photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer which is rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. The operating experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed

  3. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
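    The regime named in the title is easy to locate with dimensional estimates. The numbers below are illustrative order-of-magnitude values for a large microswimmer in water, not figures taken from the paper:

```python
U = 1e-3   # swimming speed (m/s), ~1 mm/s, e.g. a large ciliate
a = 1e-4   # cell radius (m), ~100 micrometers
nu = 1e-6  # kinematic viscosity of water (m^2/s)
D = 1e-9   # diffusivity of a small nutrient molecule in water (m^2/s)

Re = U * a / nu  # inertial vs. viscous forces: Stokes flow when Re << 1
Pe = U * a / D   # advective vs. diffusive transport of the nutrient
print(f"Re = {Re:.3g}, Pe = {Pe:.3g}")
```

Because D ≪ ν for molecular solutes, Pe = Re · (ν/D) can be large while Re stays small, which is exactly the small-Reynolds, large-Péclet regime of the advection-diffusion problem above; the concentration boundary layer then thins as Pe grows, motivating the boundary-layer rescaling mentioned in the abstract.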

  4. WORMS IN SURPRISING PLACES: CLINICAL AND MORPHOLOGICAL FEATURES

    Directory of Open Access Journals (Sweden)

    Myroshnychenko MS

    2013-06-01

    Helminthiases are among the most common human diseases and are characterized by involvement of all organs and systems in the pathological process. In this article, the authors discuss a few cases of typical and atypical localizations of parasitic worms, such as filariae and pinworms, which were recovered from surprising places in the bodies of patients in the Kharkiv region. This article will allow doctors in practical health care to pay special attention to the timely prevention and diagnostics of this pathology.

  5. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01, show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent some kind of an echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  6. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)

  7. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
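    The core model-selection step described above (GMM clustering with the number of clusters chosen by the Bayesian Information Criterion) can be sketched with scikit-learn on synthetic two-dimensional data; this is an illustration of the approach, not FlowGM's actual code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Three well-separated synthetic "cell populations" in a 2-D marker space
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(300, 2)),
    rng.normal(loc=(4.0, 0.0), scale=0.5, size=(300, 2)),
    rng.normal(loc=(0.0, 4.0), scale=0.5, size=(300, 2)),
])

# Fit GMMs with an increasing number of components; keep the BIC minimizer
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
print(best_k)  # expect 3 clusters for this toy data
```

BIC penalizes the extra parameters of larger mixtures, so it stops adding components once new clusters no longer improve the likelihood enough, which is the behavior the pipeline relies on to choose the cluster count automatically.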

  8. Primary Care Practice: Uncertainty and Surprise

    Science.gov (United States)

    Crabtree, Benjamin F.

    I will focus my comments on uncertainty and surprise in primary care practices. I am a medical anthropologist by training, and have been a full-time researcher in family medicine for close to twenty years. In this talk I want to look at primary care practices as complex systems, particularly taking the perspective of translating evidence into practice. I am going to discuss briefly the challenges we have in primary care, and in medicine in general, of translating new evidence into the everyday care of patients. To do this, I will look at two studies that we have conducted on family practices, then think about how practices can be best characterized as complex adaptive systems. Finally, I will focus on the implications of this portrayal for disseminating new knowledge into practice.

  9. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. This methodology is novel in its enormous reduction of the number of scenarios in HEN design problems and of the computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of the HEN is guaranteed at a specific level of confidence.
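    The Monte Carlo flexibility test used above can be caricatured in a few lines: sample the uncertain parameters, evaluate a design constraint per scenario, and report the achieved level of confidence. The energy balance, parameter distributions, and threshold below are all invented for the illustration:

```python
import random

random.seed(0)

def design_feasible(inlet_temp_C, flow_kg_s):
    """Hypothetical constraint: the exchanger must still deliver 100 kW."""
    duty_kW = flow_kg_s * 4.2 * (inlet_temp_C - 60.0)  # toy energy balance
    return duty_kW >= 100.0

# Sample the uncertain inlet temperature and flow rate over many scenarios
n_scenarios = 10_000
feasible = sum(
    design_feasible(random.gauss(90.0, 5.0), random.gauss(1.0, 0.1))
    for _ in range(n_scenarios)
)
confidence = feasible / n_scenarios
print(f"design feasible in {confidence:.1%} of sampled scenarios")
```

A real HEN problem replaces the one-line constraint with the network model and, as the abstract stresses, hinges on keeping the number of explicitly optimized scenarios far smaller than the number of sampled ones.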

  10. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

    A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)

  11. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λφ⁴, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations

  12. What is a surprise earthquake? The example of the 2002, San Giuliano (Italy) event

    Directory of Open Access Journals (Sweden)

    M. Mucciarelli

    2005-06-01

    Both in the scientific literature and in the mass media, some earthquakes are defined as «surprise earthquakes». Based on his own judgment, probably any geologist, seismologist or engineer may have his own list of past «surprise earthquakes». This paper tries to quantify the underlying individual perception that may lead a scientist to apply such a definition to a seismic event. The meaning is different depending on the disciplinary approach. For geologists, the Italian database of seismogenic sources is still too incomplete to allow for a quantitative estimate of the subjective degree of belief. For seismologists, quantification is possible by defining the distance between an earthquake and its closest previous neighbor. Finally, for engineers, the San Giuliano quake could not be considered a surprise, since probabilistic site hazard estimates reveal that the change before and after the earthquake is just 4%.
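    The seismological criterion mentioned above (distance of an event to its closest previous neighbor) is straightforward to compute; the catalogue coordinates below are invented, and the flat-earth distance is only a regional-scale approximation:

```python
import math

# Hypothetical catalogue of past epicenters as (lat, lon) pairs
catalogue = [(41.5, 14.0), (42.1, 13.4), (40.8, 14.5)]
new_event = (41.7, 14.8)

def km(a, b):
    # Small-angle flat-earth approximation, adequate at regional scale
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians((a[0] + b[0]) / 2))
    return math.hypot(dlat, dlon)

nearest = min(km(new_event, e) for e in catalogue)
print(round(nearest, 1))  # the larger this is, the more of a "surprise"
```

A completeness caveat applies here just as in the paper: the criterion is only as informative as the catalogue it is computed against.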

  13. Analysis of physiological signals for recognition of boredom, pain, and surprise emotions.

    Science.gov (United States)

    Jang, Eun-Hye; Park, Byoung-Jun; Park, Mi-Sook; Kim, Sang-Hyeob; Sohn, Jin-Hun

    2015-06-18

    The aim of the study was to examine the differences among boredom, pain, and surprise, and to propose approaches for emotion recognition based on physiological signals. The three emotions are induced through the presentation of emotional stimuli, and electrocardiography (ECG), electrodermal activity (EDA), skin temperature (SKT), and photoplethysmography (PPG) are measured as physiological signals to collect a dataset from 217 participants experiencing the emotions. Twenty-seven physiological features are extracted from the signals to classify the three emotions. Discriminant function analysis (DFA), a statistical method, and five machine learning algorithms (linear discriminant analysis (LDA), classification and regression trees (CART), self-organizing map (SOM), the Naïve Bayes algorithm, and support vector machine (SVM)) are used for classifying the emotions. The results show that the difference in physiological responses among the emotions is significant in heart rate (HR), skin conductance level (SCL), skin conductance response (SCR), mean skin temperature (meanSKT), blood volume pulse (BVP), and pulse transit time (PTT), and the highest recognition accuracy of 84.7% is obtained by using DFA. This study demonstrates the differences among boredom, pain, and surprise and the best emotion recognizer for the classification of the three emotions by using physiological signals.

  14. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  15. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  16. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  17. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000 ... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit

  18. Teacher Supply and Demand: Surprises from Primary Research

    Directory of Open Access Journals (Sweden)

    Andrew J. Wayne

    2000-09-01

    An investigation of primary research studies on public school teacher supply and demand revealed four surprises. Projections show that enrollments are leveling off. Relatedly, annual hiring increases should be only about two or three percent over the next few years. Results from studies of teacher attrition also yield unexpected results. Excluding retirements, only about one in 20 teachers leaves each year, and the novice teachers who quit mainly cite personal and family reasons, not job dissatisfaction. Each of these findings broadens policy makers' options for teacher supply.

  19. Conference of “Uncertainty and Surprise: Questions on Working with the Unexpected and Unknowable”

    CERN Document Server

    McDaniel, Reuben R; Uncertainty and Surprise in Complex Systems : Questions on Working with the Unexpected

    2005-01-01

    Complexity science has been a source of new insight in physical and social systems and has demonstrated that unpredictability and surprise are fundamental aspects of the world around us. This book is the outcome of a discussion meeting of leading scholars and critical thinkers with expertise in complex systems sciences and leaders from a variety of organizations, sponsored by the Prigogine Center at The University of Texas at Austin and the Plexus Institute, to explore strategies for understanding uncertainty and surprise. Besides contributions to the conference, it includes a key digest by the editors as well as a commentary by the late Nobel laureate Ilya Prigogine, "Surprises in half of a century". The book is intended for researchers and scientists in complexity science as well as for a broad interdisciplinary audience of both practitioners and scholars. It will serve those interested in the research issues and in the application of complexity science to physical and social systems.

  20. Models of Automation surprise : results of a field survey in aviation

    NARCIS (Netherlands)

    De Boer, Robert; Dekker, Sidney

    2017-01-01

    Automation surprises in aviation continue to be a significant safety concern and the community’s search for effective strategies to mitigate them are ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration

  1. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

    The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, that is, that ρ does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply, Steigman points out that in Dirac's original cosmology R ≈ t^(1/3), and using this model the results and conclusions of the present authors' paper do apply, but using a variation chosen by Canuto et al. (T ≈ t) Dirac's LNH cannot apply. Additionally, it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  2. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.

  3. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.
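    The PDE setting is beyond a short example, but the two limit theorems themselves are easy to observe numerically; here is a generic Monte Carlo check, with exponential samples standing in for the ergodic averages rather than the randomly forced PDE of the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.exponential(scale=1.0, size=n)  # mean 1, variance 1

# SLLN: the running mean converges to the true mean
running_mean = np.cumsum(x) / np.arange(1, n + 1)
print(abs(running_mean[-1] - 1.0))  # small, shrinking like 1/sqrt(n)

# CLT: sqrt(n) * (sample mean - 1) over independent batches is ~ N(0, 1)
batches = rng.exponential(1.0, size=(2000, 1000)).mean(axis=1)
z = np.sqrt(1000) * (batches - 1.0)
print(z.mean(), z.std())  # close to 0 and 1
```

The abstract's contribution is proving exactly such statements, with convergence rates, for solutions of dissipative PDEs under sufficiently non-degenerate random forcing, where independence is replaced by exponential mixing.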

  4. Surprising results: HIV testing and changes in contraceptive practices among young women in Malawi

    Science.gov (United States)

    Sennott, Christie; Yeatman, Sara

    2015-01-01

    This study uses eight waves of data from the population-based Tsogolo la Thanzi study (2009–2011) in rural Malawi to examine changes in young women’s contraceptive practices, including the use of condoms, non-barrier contraceptive methods, and abstinence, following positive and negative HIV tests. The analysis factors in women’s prior perceptions of their HIV status that may already be shaping their behaviour and separates surprise HIV test results from those that merely confirm what was already believed. Fixed effects logistic regression models show that HIV testing frequently affects the contraceptive practices of young Malawian women, particularly when the test yields an unexpected result. Specifically, women who are surprised to test HIV positive increase their condom use and are more likely to use condoms consistently. Following an HIV negative test (whether a surprise or expected), women increase their use of condoms and decrease their use of non-barrier contraceptives; the latter may be due to an increase in abstinence following a surprise negative result. Changes in condom use following HIV testing are robust to the inclusion of potential explanatory mechanisms including fertility preferences, relationship status, and the perception that a partner is HIV positive. The results demonstrate that both positive and negative tests can influence women’s sexual and reproductive behaviours, and emphasise the importance of conceptualizing HIV testing as offering new information only insofar as results deviate from prior perceptions of HIV status. PMID:26160156

  5. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  6. Would you be surprised if this patient died?: Preliminary exploration of first and second year residents' approach to care decisions in critically ill patients

    Directory of Open Access Journals (Sweden)

    Armstrong John D

    2003-01-01

Full Text Available Abstract Background How physicians approach decision-making when caring for critically ill patients is poorly understood. This study aims to explore how residents think about prognosis and approach care decisions when caring for seriously ill, hospitalized patients. Methods Qualitative study where we conducted structured discussions with first and second year internal medicine residents (n = 8) caring for critically ill patients during Medical Intensive Care Unit Ethics and Discharge Planning Rounds. Residents were asked to respond to questions beginning with "Would you be surprised if this patient died?" Results An equal number of residents responded that they would (n = 4) or would not (n = 4) be surprised if their patient died. Reasons for being surprised included the rapid onset of an acute illness, reversible disease, improving clinical course and the patient's prior survival under similar circumstances. Residents reported no surprise with worsening clinical course. Based on the realization that their patient might die, residents cited potential changes in management that included clarifying treatment goals, improving communication with families, spending more time with patients and ordering fewer laboratory tests. Perceived or implied barriers to changes in management included limited time, competing clinical priorities, "not knowing" a patient, limited knowledge and experience, presence of diagnostic or prognostic uncertainty and unclear treatment goals. Conclusions These junior-level residents appear to rely on clinical course, among other factors, when assessing prognosis and the possibility for death in severely ill patients. Further investigation is needed to understand how these factors impact decision-making and whether perceived barriers to changes in patient management influence approaches to care.

  7. The effect of emotionally valenced eye region images on visuocortical processing of surprised faces.

    Science.gov (United States)

    Li, Shuaixia; Li, Ping; Wang, Wei; Zhu, Xiangru; Luo, Wenbo

    2018-05-01

In this study, we presented pictorial representations of happy, neutral, and fearful expressions projected in the eye regions to determine whether the eye region alone is sufficient to produce a context effect. Participants were asked to judge the valence of surprised faces that had been preceded by a picture of an eye region. Behavioral results showed that affective ratings of surprised faces were context dependent. Prime-related ERPs with presentation of happy eyes elicited a larger P1 than those for neutral and fearful eyes, likely due to the recognition advantage provided by a happy expression. Target-related ERPs showed that surprised faces in the context of fearful and happy eyes elicited a dramatically larger C1 than those in the neutral context, which reflected the modulation by predictions during the earliest stages of face processing. There were larger N170 amplitudes with neutral and fearful eye contexts compared to the happy context, suggesting that faces were being integrated with contextual threat information. The P3 component exhibited enhanced brain activity in response to faces preceded by happy and fearful eyes compared with neutral eyes, indicating that motivated attention processing may be involved at this stage. Altogether, these results indicate for the first time that the influence of isolated eye regions on the perception of surprised faces involves preferential processing at the early stages and elaborate processing at the late stages. Moreover, higher cognitive processes such as predictions and attention can modulate face processing from the earliest stages in a top-down manner. © 2017 Society for Psychophysiological Research.

  8. On the surprising rigidity of the Pauli exclusion principle

    International Nuclear Information System (INIS)

    Greenberg, O.W.

    1989-01-01

    I review recent attempts to construct a local quantum field theory of small violations of the Pauli exclusion principle and suggest a qualitative reason for the surprising rigidity of the Pauli principle. I suggest that small violations can occur in our four-dimensional world as a consequence of the compactification of a higher-dimensional theory in which the exclusion principle is exactly valid. I briefly mention a recent experiment which places a severe limit on possible violations of the exclusion principle. (orig.)

  9. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

This article considers Massey's construction for building linear secret sharing schemes from toric varieties over a finite field $\\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...
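The abstract's bound on the number of players, (q − 1)^r − 1, is easy to tabulate; the field sizes q and dimensions r below are illustrative choices, not values from the article:

```python
# Maximum number of players admitted by the toric construction, per the abstract.
def max_players(q, r):
    return (q - 1) ** r - 1

# Tabulate for a few hypothetical field sizes q and dimensions r.
table = {(q, r): max_players(q, r) for q in (16, 256) for r in (1, 2, 3)}
# already a modest field, q = 256 with r = 2, admits 65024 players
```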

  10. Risk, surprises and black swans fundamental ideas and concepts in risk assessment and risk management

    CERN Document Server

    Aven, Terje

    2014-01-01

    Risk, Surprises and Black Swans provides an in depth analysis of the risk concept with a focus on the critical link to knowledge; and the lack of knowledge, that risk and probability judgements are based on.Based on technical scientific research, this book presents a new perspective to help you understand how to assess and manage surprising, extreme events, known as 'Black Swans'. This approach looks beyond the traditional probability-based principles to offer a broader insight into the important aspects of uncertain events and in doing so explores the ways to manage them.

  11. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
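As a hedged illustration of the kind of question posed above (not the authors' system model), a Monte Carlo sketch can estimate how the outage probability of a simple N-antenna receive-diversity Rayleigh link falls as antennas are added; the SNR, target rate, and antenna counts are hypothetical:

```python
import math, random

def outage_prob(n_rx, snr_db=0.0, rate=2.0, trials=20000, seed=7):
    """Estimate Pr[log2(1 + SNR * sum_i |h_i|^2) < rate] for i.i.d. Rayleigh h_i."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    fails = 0
    for _ in range(trials):
        # |h_i|^2 for a complex Gaussian channel coefficient with unit average power
        gain = sum(rng.gauss(0, math.sqrt(0.5)) ** 2 +
                   rng.gauss(0, math.sqrt(0.5)) ** 2 for _ in range(n_rx))
        if math.log2(1 + snr * gain) < rate:
            fails += 1
    return fails / trials

probs = [outage_prob(n) for n in (1, 2, 4, 8)]
# outage probability shrinks monotonically as antennas are added
```

Even this toy model reproduces the qualitative message of the record: a given outage target is met with relatively few antennas.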

  12. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  13. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  14. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
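A minimal sketch of the generation idea (assumed relation types and attributes; not the actual software of Matzen et al.): apply a chosen rule to one numeric attribute, e.g. "number of shapes", across the columns of each row of a 3×3 matrix:

```python
# Hypothetical relation types combined by the generator.
RULES = {
    "constant":    lambda start, col: start,        # value fixed along the row
    "progression": lambda start, col: start + col,  # value increments across the row
}

def make_matrix(rule_name, starts=(1, 2, 3)):
    """Build a 3x3 matrix of attribute values, one row per starting value."""
    rule = RULES[rule_name]
    return [[rule(start, col) for col in range(3)] for start in starts]

problem = make_matrix("progression")   # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
answer = problem[2][2]                 # the cell left blank for the test-taker
```

A real tool would combine several such rules over several visual attributes and render the cells graphically; the sketch only shows how rule composition yields arbitrarily many problems.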

  16. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

A new context for the appearance of the Eddington number (10^39), which is due to the examination of elastic scattering of scalar particles (ΠK → ΠK) non-minimally coupled to gravity, is presented. (author) [pt

  17. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  18. How to use the Fast Fourier Transform in Large Finite Fields

    OpenAIRE

    Petersen, Petur Birgir

    2011-01-01

The article contains suggestions on how to perform the Fast Fourier Transform over large finite fields. The technique uses the fact that the multiplicative groups of specific prime fields are surprisingly composite.
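The idea can be sketched as a number-theoretic transform: an FFT over GF(p) exists whenever the transform length divides the composite group order p − 1. The prime p = 257 (group order 256 = 2^8) and the generator 3 below are small illustrative choices, not the article's parameters:

```python
P = 257   # prime modulus; its multiplicative group has highly composite order 256
G = 3     # 3 is a primitive root modulo 257

def ntt(a, invert=False):
    """Recursive radix-2 number-theoretic transform; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    w = pow(G, (P - 1) // n, P)      # primitive n-th root of unity mod P
    if invert:
        w = pow(w, P - 2, P)         # inverse root via Fermat's little theorem
    even = ntt(a[0::2], invert)
    odd = ntt(a[1::2], invert)
    out, x = [0] * n, 1
    for k in range(n // 2):          # butterfly combining the two half-transforms
        t = x * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        x = x * w % P
    return out

def intt(a):
    """Inverse transform: inverse roots plus a final scaling by n^-1 mod P."""
    n_inv = pow(len(a), P - 2, P)
    return [v * n_inv % P for v in ntt(a, invert=True)]

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(data)) == data       # round-trip recovers the input exactly
```

Because arithmetic is exact modular arithmetic, there is no floating-point rounding; the catch, as the article notes, is finding primes whose group order is smooth enough for the desired transform length.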

  19. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application based tests, this study evaluates all of the four RNGs used in previous FPGA based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations: a parallel version of additive lagged Fibonacci generator (Parallel ALFG) is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
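A software sketch of the additive lagged Fibonacci generator (ALFG) named above, x[n] = (x[n−j] + x[n−k]) mod 2^m, with one independently seeded instance per parallel unit; the lag pair (5, 17), word size, and seeding scheme are illustrative, not the paper's FPGA design:

```python
import random

class ALFG:
    """Additive lagged Fibonacci generator over a circular buffer of k words."""
    def __init__(self, seed, j=5, k=17, m=32):
        self.j, self.k, self.mask = j, k, (1 << m) - 1
        rng = random.Random(seed)
        # Seed the lag table; forcing one odd entry guarantees the full period.
        self.state = [rng.getrandbits(m) | (i == 0) for i in range(k)]
        self.pos = 0

    def next(self):
        n = len(self.state)
        new = (self.state[(self.pos - self.j) % n] +
               self.state[(self.pos - self.k) % n]) & self.mask
        self.state[self.pos % n] = new   # overwrite the oldest word in place
        self.pos += 1
        return new

# One generator per parallel stream, each with its own seed.
streams = [ALFG(seed) for seed in (1, 2, 3)]
samples = [[g.next() for _ in range(4)] for g in streams]
```

The per-step work is a single addition and a masked store, which is why lagged Fibonacci generators map well onto FPGA pipelines; true stream independence requires a more careful seeding discipline than the simple per-seed tables shown here.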

  20. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
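One standard route to the first two estimates (a generic sketch assuming homogeneous turbulence, not necessarily the exact definitions used by Zhou and Thornber) infers an effective viscosity from the dissipation-enstrophy relation and forms the Reynolds number from it:

```latex
% In homogeneous turbulence, dissipation and enstrophy are linked by
\varepsilon = \nu\,\langle \omega^2 \rangle,
% so an ILES field with measured \varepsilon and \langle\omega^2\rangle implies
\nu_{\mathrm{eff}} = \frac{\varepsilon}{\langle \omega^2 \rangle},
\qquad
Re_{\mathrm{eff}} = \frac{u' L}{\nu_{\mathrm{eff}}}.
% Equivalently, via the ratio of integral to inner-viscous length scales,
Re_{\mathrm{eff}} \sim \left(\frac{L}{\eta}\right)^{4/3}.
```

Comparing such independent estimates on the same data set is what allows the self-consistency check described in the abstract.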

  1. Stars Form Surprisingly Close to Milky Way's Black Hole

    Science.gov (United States)

    2005-10-01

million low mass, sun-like stars in and around the ring, whereas in the disk model, the number of low mass stars could be much less. Nayakshin and his coauthor, Rashid Sunyaev of the Max Planck Institute for Physics in Garching, Germany, used Chandra observations to compare the X-ray glow from the region around Sgr A* to the X-ray emission from thousands of young stars in the Orion Nebula star cluster. They found that the Sgr A* star cluster contains only about 10,000 low mass stars, thereby ruling out the migration model. "We can now say that the stars around Sgr A* were not deposited there by some passing star cluster, rather they were born there," said Sunyaev. "There have been theories that this was possible, but this is the first real evidence. Many scientists are going to be very surprised by these results." Because the Galactic Center is shrouded in dust and gas, it has not been possible to look for the low-mass stars in optical observations. In contrast, X-ray data have allowed astronomers to penetrate the veil of gas and dust and look for these low mass stars. Scenario Dismissed by Chandra Results "In one of the most inhospitable places in our Galaxy, stars have prevailed," said Nayakshin. "It appears that star formation is much more tenacious than we previously believed." The results suggest that the "rules" of star formation change when stars form in the disk of a giant black hole. Because this environment is very different from typical star formation regions, there is a change in the proportion of stars that form. For example, there is a much higher percentage of massive stars in the disks around black holes. And, when these massive stars explode as supernovae, they will "fertilize" the region with heavy elements such as oxygen. This may explain the large amounts of such elements observed in the disks of young supermassive black holes.
NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for

  2. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  3. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  4. A simple biosynthetic pathway for large product generation from small substrate amounts

    Science.gov (United States)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-10-01

    A recently emerging discipline of synthetic biology has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of these pathways is generating a large amount of the desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced in order to produce a large number of the product molecules, by keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as three orders of magnitude increase in the product amount through biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways.
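The abstract's central point, that fast non-specific degradation keeps the toxic substrate low while product still accumulates, can be illustrated with a toy forward-Euler integration of a two-species scheme; the rate constants and the scheme itself are hypothetical, not the authors' model:

```python
def simulate(k_in=1.0, c=0.1, d=5.0, dt=0.001, steps=20000):
    """Integrate dS/dt = k_in - (c + d)*S and dP/dt = c*S by forward Euler.

    k_in: substrate production rate; c: conversion rate into product;
    d: non-specific substrate degradation rate (all values hypothetical).
    """
    S = P = 0.0
    for _ in range(steps):
        dS = k_in - (c + d) * S
        dP = c * S
        S += dS * dt
        P += dP * dt
    return S, P

S_fast, P_fast = simulate(d=5.0)   # fast degradation: S settles near k_in/(c+d) ~ 0.2
S_slow, P_slow = simulate(d=0.1)   # slow degradation: S accumulates toward 5.0
```

In this toy version, the steady-state substrate level is k_in/(c + d), so raising d by an order of magnitude lowers the standing substrate pool by roughly the same factor, which is the qualitative trade-off the pathway design exploits.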

  5. A simple biosynthetic pathway for large product generation from small substrate amounts

    Energy Technology Data Exchange (ETDEWEB)

    Djordjevic, Marko [Institute of Physiology and Biochemistry, Faculty of Biology, University of Belgrade (Serbia); Djordjevic, Magdalena [Institute of Physics Belgrade, University of Belgrade (Serbia)

    2012-10-01

    A recently emerging discipline of synthetic biology has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of these pathways is generating a large amount of the desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced in order to produce a large number of the product molecules, by keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as three orders of magnitude increase in the product amount through biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways. (paper)

  6. A simple biosynthetic pathway for large product generation from small substrate amounts

    International Nuclear Information System (INIS)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-01-01

    A recently emerging discipline of synthetic biology has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of these pathways is generating a large amount of the desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced in order to produce a large number of the product molecules, by keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as three orders of magnitude increase in the product amount through biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways. (paper)

  7. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  8. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  9. A Shocking Surprise in Stephan's Quintet

    Science.gov (United States)

    2006-01-01

    This false-color composite image of the Stephan's Quintet galaxy cluster clearly shows one of the largest shock waves ever seen (green arc). The wave was produced by one galaxy falling toward another at speeds of more than one million miles per hour. The image is made up of data from NASA's Spitzer Space Telescope and a ground-based telescope in Spain. Four of the five galaxies in this picture are involved in a violent collision, which has already stripped most of the hydrogen gas from the interiors of the galaxies. The centers of the galaxies appear as bright yellow-pink knots inside a blue haze of stars, and the galaxy producing all the turmoil, NGC7318b, is the left of two small bright regions in the middle right of the image. One galaxy, the large spiral at the bottom left of the image, is a foreground object and is not associated with the cluster. The titanic shock wave, larger than our own Milky Way galaxy, was detected by the ground-based telescope using visible-light wavelengths. It consists of hot hydrogen gas. As NGC7318b collides with gas spread throughout the cluster, atoms of hydrogen are heated in the shock wave, producing the green glow. Spitzer pointed its infrared spectrograph at the peak of this shock wave (middle of green glow) to learn more about its inner workings. This instrument breaks light apart into its basic components. Data from the instrument are referred to as spectra and are displayed as curving lines that indicate the amount of light coming at each specific wavelength. The Spitzer spectrum showed a strong infrared signature for incredibly turbulent gas made up of hydrogen molecules. This gas is caused when atoms of hydrogen rapidly pair-up to form molecules in the wake of the shock wave. Molecular hydrogen, unlike atomic hydrogen, gives off most of its energy through vibrations that emit in the infrared. This highly disturbed gas is the most turbulent molecular hydrogen ever seen. 
Astronomers were surprised not only by the turbulence

  10. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  11. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

Highlights: ► We perform direct and hybrid large-eddy simulations of high-Reynolds-number, low-Prandtl-number turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of near-wall modeling strategies in RANS are provided. ► The turbulent Prandtl number predicted by our simulation differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and a LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best practice guidelines for RANS (Reynolds averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.

  12. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address the challenges posed by a large number of criteria in multi-criteria decision making (MCDM) problems and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, the corresponding theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the case of different interval numbers sharing the same expectation. The risk preferences (risk-seeking, risk-neutral and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the risk preferences of the decision makers based on the corresponding theoretical model. Finally, a new information-aggregation algorithm is proposed based on fairness maximization of decision results for group decisions, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of the new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision
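    One way to make the role of heterogeneous risk preferences concrete is a Hurwicz-style scoring of interval-valued criteria. The sketch below is illustrative only (it is not the algorithm proposed in the paper): each criterion value is an interval (lo, hi), and a risk-attitude parameter alpha weights the upper versus lower bound, so risk-seeking and risk-averse decision makers may rank the same alternatives differently. All names and data are hypothetical.

```python
# Hurwicz-style scoring of interval-valued criteria (illustrative sketch only):
# alpha in [0, 1] encodes risk attitude: 1 = risk-seeking, 0.5 = neutral, 0 = averse.

def interval_score(interval, alpha):
    lo, hi = interval
    return alpha * hi + (1 - alpha) * lo

def rank_alternatives(alternatives, weights, alpha):
    """Weighted aggregate score per alternative; each criterion is a (lo, hi) interval."""
    scores = {}
    for name, intervals in alternatives.items():
        scores[name] = sum(w * interval_score(iv, alpha)
                           for w, iv in zip(weights, intervals))
    return sorted(scores, key=scores.get, reverse=True)

alternatives = {
    "A": [(0.4, 0.9), (0.3, 0.8)],   # wide intervals: high upside, high risk
    "B": [(0.6, 0.7), (0.5, 0.6)],   # narrow intervals: safe but capped
}
weights = [0.5, 0.5]

print(rank_alternatives(alternatives, weights, alpha=0.9))  # risk-seeking prefers A
print(rank_alternatives(alternatives, weights, alpha=0.1))  # risk-averse prefers B
```

    Note how two alternatives with comparable midpoints can swap ranks purely because of the decision maker's attitude toward interval width, which is the kind of heterogeneity the proposed process has to reconcile at the group level.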

  13. X-rays from comets - a surprising discovery

    CERN Document Server

    CERN. Geneva

    2000-01-01

    Comets are kilometre-size aggregates of ice and dust, which remained from the formation of the solar system. There was no obvious reason to expect X-ray emission from such objects. Nevertheless, when comet Hyakutake (C/1996 B2) was observed with the ROSAT X-ray satellite during its close approach to Earth in March 1996, bright X-ray emission from this comet was discovered. This finding triggered a search in archival ROSAT data for comets that might have accidentally crossed the field of view during observations of unrelated targets. To increase the surprise even more, X-ray emission was detected from four additional comets, which were optically 300 to 30 000 times fainter than Hyakutake. For one of them, comet Arai (C/1991 A2), X-ray emission was even found in data which were taken six weeks before the comet was optically discovered. These findings showed that comets represent a new class of celestial X-ray sources. The subsequent detection of X-ray emission from several other comets in dedicated observations confir...

  14. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean Lp-means, known to be true for the Euclidean L2-means. Let the Lp-mean estimator be the specific functional that estimates the Lp-mean of N independent and identically distributed random variables; then, (i) the expectation value of the Lp-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the Lp-mean estimator also equals the mean of the distributions.
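    As a numerical illustration (not part of the paper), the Lp-mean of a sample can be taken as the value m minimizing Σ|x_i − m|^p, a standard definition which is assumed here; the cost is convex for p ≥ 1, so a ternary search locates the minimizer. For a symmetric distribution, the estimate lands near the distribution mean for large N, in line with the theorems:

```python
import random

def lp_mean(xs, p):
    """Lp-mean taken as argmin over m of sum(|x - m|**p); convex for p >= 1,
    so a ternary search over [min(xs), max(xs)] converges."""
    def cost(m):
        return sum(abs(x - m) ** p for x in xs)
    lo, hi = min(xs), max(xs)
    for _ in range(100):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

random.seed(1)
true_mean = 2.0
# Symmetric distribution centred on true_mean
xs = [true_mean + random.uniform(-1, 1) for _ in range(5000)]
est = lp_mean(xs, p=1.5)
print(est)  # close to 2.0, and closer as N grows
```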

  15. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    Science.gov (United States)

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. 
Furthermore, a machine learning classifier identified

  16. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4

  17. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; the rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding the gearbox, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large. The consequences may be either shortened life of expensive components, e.g. oil problems in gearboxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations of various construction periods, sizes, suppliers, geographies and topographies are also presented. The general conclusion is that the differences are small. The results of the evaluation of questionnaires correspond well with the results of the in-depth interviews with clients. 
The problem that clients agreed upon as the greatest is the lack

  18. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected. PMID:25781935

  19. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected.

  20. Bagpipes and Artichokes: Surprise as a Stimulus to Learning in the Elementary Music Classroom

    Science.gov (United States)

    Jacobi, Bonnie Schaffhauser

    2016-01-01

    Incorporating surprise into music instruction can stimulate student attention, curiosity, and interest. Novelty focuses attention in the reticular activating system, increasing the potential for brain memory storage. Elementary ages are ideal for introducing novel instruments, pieces, composers, or styles of music. Young children have fewer…

  1. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for the Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at a large number of colours N_c. At next-to-leading order, impact factors are not uniquely defined and must accord with the BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and to the energy-evolution parameter which is invariant under Möbius transformations in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  2. Large picture archiving and communication systems of the world--Part 2.

    Science.gov (United States)

    Bauman, R A; Gell, G; Dwyer, S J

    1996-11-01

    A survey of 82 institutions worldwide was done in 1995 to identify large picture archiving and communication systems (PACS) in clinical operation. A continuing strong trend toward the creation and operation of large PACS was identified. In the 15 months since the first such survey the number of clinical large PACS went from 13 to 23, almost a doubling in that short interval. New systems were added in Asia, Europe, and North America. A strong move to primary interpretation from soft copy was identified, and filmless radiology has become a reality. Workstations for interpretation reside mainly within radiology, but one-third of reporting PACS have more than 20 workstations outside of radiology. Fiber distributed data interface networks were the most numerous, but a variety of networks was reported to be in use. Replies on various display times showed surprisingly good, albeit diverse, speeds. The planned archive length of many systems was 60 months, with usually more than 1 year of data on-line. The main large archive and off-line storage media for these systems were optical disks and magneto-optical disks. Compression was not used before interpretation in most cases, but many systems used 2.5:1 compression for on-line, interpreted cases and 10:1 compression for longer-term archiving. A move to Digital Imaging and Communications in Medicine (DICOM) interface usage was identified.

  3. The Educational Philosophies of Mordecai Kaplan and Michael Rosenak: Surprising Similarities and Illuminating Differences

    Science.gov (United States)

    Schein, Jeffrey; Caplan, Eric

    2014-01-01

    The thoughts of Mordecai Kaplan and Michael Rosenak present surprising commonalities as well as illuminating differences. Similarities include the perception that Judaism and Jewish education are in crisis, the belief that Jewish peoplehood must include commitment to meaningful content, the need for teachers to teach from a position of…

  4. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  5. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  6. Probabilities the little numbers that rule our lives

    CERN Document Server

    Olofsson, Peter

    2014-01-01

    Praise for the First Edition"If there is anything you want to know, or remind yourself, about probabilities, then look no further than this comprehensive, yet wittily written and enjoyable, compendium of how to apply probability calculations in real-world situations."- Keith Devlin, Stanford University, National Public Radio's "Math Guy" and author of The Math Gene and The Unfinished GameFrom probable improbabilities to regular irregularities, Probabilities: The Little Numbers That Rule Our Lives, Second Edition investigates the often surprising effects of risk and chance in our lives. Featur

  7. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.

  8. The irrationals a story of the numbers you can't count on

    CERN Document Server

    Havil, Julian

    2012-01-01

    The ancient Greeks discovered them, but it wasn’t until the nineteenth century that irrational numbers were properly understood and rigorously defined, and even today not all their mysteries have been revealed. In The Irrationals, the first popular and comprehensive book on the subject, Julian Havil tells the story of irrational numbers and the mathematicians who have tackled their challenges, from antiquity to the twenty-first century. Along the way, he explains why irrational numbers are surprisingly difficult to define—and why so many questions still surround them. Fascinating and illuminating, this is a book for everyone who loves math and the history behind it.

  9. Colleges Leverage Large Endowments to Benefit Some Donors and Employees

    Science.gov (United States)

    Hermes, J. J.

    2008-01-01

    College endowments have beaten the market so consistently in recent years, it is not surprising that individuals would like to take advantage of that institutional wisdom to invest their own money. Increasingly, many are. A small but growing number of universities are trying to entice donors to invest their trusts alongside college endowments,…

  10. The Impact of a Surprise Dividend Increase on a Stock's Performance: The Analysis of Companies Listed on the Warsaw Stock Exchange

    Directory of Open Access Journals (Sweden)

    Tomasz Słoński

    2012-01-01

    Full Text Available The reaction of marginal investors to the announcement of a surprise dividend increase has been measured. Although the field research is performed on companies listed on the Warsaw Stock Exchange, the paper has important theoretical implications. Valuation theory gives many clues for the interpretation of changes in dividends. At the start of the literature review, the assumption of the irrelevance of dividends (to investment decisions) is described. This assumption is the basis for up-to-date valuation procedures leading to fundamental and fair market valuation of equity (shares). The paper is designed to verify whether the market value of stock is immune to the surprise announcement of a dividend increase. This study of the effect of a surprise dividend increase gives the chance to partially isolate such an event from dividend changes based on long-term expectations. The result of the research explicitly shows that a surprise dividend increase is on average welcomed by investors (an average abnormal return of 2.24% with an associated p-value of 0.001). Abnormal returns are realized by investors when there is a surprise increase in a dividend payout. The subsample of relatively high increases in dividend payout enables investors to gain a 3.2% return on average. The results show that valuation models should be revised to take into account a possible impact of dividend changes on investors' behavior. (original abstract)

  11. Dealing with unexpected events on the flight deck : A conceptual model of startle and surprise

    NARCIS (Netherlands)

    Landman, H.M.; Groen, E.L.; Paassen, M.M. van; Bronkhorst, A.W.; Mulder, M.

    2017-01-01

    Objective: A conceptual model is proposed in order to explain pilot performance in surprising and startling situations. Background: Today’s debate around loss of control following in-flight events and the implementation of upset prevention and recovery training has highlighted the importance of

  12. Symmetry numbers for rigid, flexible, and fluxional molecules: theory and applications.

    Science.gov (United States)

    Gilson, Michael K; Irikura, Karl K

    2010-12-16

    The use of molecular simulations and ab initio calculations to predict thermodynamic properties of molecules has become routine. Such methods rely upon an accurate representation of the molecular partition function or configurational integral, which in turn often includes a rotational symmetry number. However, the reason for including the symmetry number is unclear to many practitioners, and there is also a need for a general prescription for evaluating the symmetry numbers of flexible molecules, i.e., for molecules with thermally active internal degrees of freedom, such as internal rotors. Surprisingly, we have been unable to find any complete and convincing explanations of these important issues in textbooks or the journal literature. The present paper aims to explain why symmetry numbers are needed and how their values should be determined. Both classical and quantum approaches are provided.
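    A minimal sketch of the point being made, for the rigid-rotor partition function of a nonlinear molecule (the rotational temperatures below are approximate textbook values for water and are used only for illustration): dividing by the symmetry number σ leaves all energy-derived quantities unchanged and shifts the entropy by exactly −R ln σ.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def q_rot(T, thetas, sigma):
    """Rigid-rotor partition function of a nonlinear molecule:
    q_rot = (sqrt(pi)/sigma) * sqrt(T**3 / (Theta_A*Theta_B*Theta_C))."""
    tA, tB, tC = thetas
    return (math.sqrt(math.pi) / sigma) * math.sqrt(T**3 / (tA * tB * tC))

# Approximate rotational temperatures of H2O in kelvin (illustrative values)
thetas_h2o = (40.1, 20.9, 13.4)
T = 298.15
sigma = 2  # H2O: the C2 rotation maps the molecule onto an indistinguishable copy

q_sym = q_rot(T, thetas_h2o, sigma)
q_nosym = q_rot(T, thetas_h2o, 1)

# The only thermodynamic consequence of sigma is an entropy shift of -R*ln(sigma),
# since q enters the entropy through R*ln(q) and sigma is temperature independent.
dS = R * math.log(q_sym / q_nosym)
print(q_sym, dS)  # dS = -R*ln(2), about -5.76 J/(mol*K)
```

    For flexible molecules the same logic applies, but σ must count the symmetries of the full set of thermally accessible conformations, which is the prescription the paper sets out to make precise.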

  13. Semantic relation vs. surprise: the differential effects of related and unrelated co-verbal gestures on neural encoding and subsequent recognition.

    Science.gov (United States)

    Straube, Benjamin; Meyer, Lea; Green, Antonia; Kircher, Tilo

    2014-06-03

    Speech-associated gesturing leads to memory advantages for spoken sentences. However, unexpected or surprising events are also likely to be remembered. With this study we test the hypothesis that different neural mechanisms (semantic elaboration and surprise) lead to memory advantages for iconic and unrelated gestures. During fMRI-data acquisition participants were presented with video clips of an actor verbalising concrete sentences accompanied by iconic gestures (IG; e.g., circular gesture; sentence: "The man is sitting at the round table"), unrelated free gestures (FG; e.g., unrelated up down movements; same sentence) and no gestures (NG; same sentence). After scanning, recognition performance for the three conditions was tested. Videos were evaluated regarding semantic relation and surprise by a different group of participants. The semantic relationship between speech and gesture was rated higher for IG (IG>FG), whereas surprise was rated higher for FG (FG>IG). Activation of the hippocampus correlated with subsequent memory performance of both gesture conditions (IG+FG>NG). For the IG condition we found activation in the left temporal pole and middle cingulate cortex (MCC; IG>FG). In contrast, for the FG condition posterior thalamic structures (FG>IG) as well as anterior and posterior cingulate cortices were activated (FG>NG). Our behavioral and fMRI-data suggest different mechanisms for processing related and unrelated co-verbal gestures, both of them leading to enhanced memory performance. Whereas activation in MCC and left temporal pole for iconic co-verbal gestures may reflect semantic memory processes, memory enhancement for unrelated gestures relies on the surprise response, mediated by anterior/posterior cingulate cortex and thalamico-hippocampal structures. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies have tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in the studies. It has been hypothesized that there is an object-file system for ‘small’ numbers and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set-size limit of the object-file system (three items). A boundary was defined accordingly, categorizing numbers below four as ‘small’ and numbers from four upwards as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a large and a small number, respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4, respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies between studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number-processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude-processing skills.

  15. Hepatobiliary fascioliasis in non-endemic zones: a surprise diagnosis.

    Science.gov (United States)

    Jha, Ashish Kumar; Goenka, Mahesh Kumar; Goenka, Usha; Chakrabarti, Amrita

    2013-03-01

    Fascioliasis is a zoonotic infection caused by Fasciola hepatica. Because of population migration and international food trade, human fascioliasis is an increasingly recognised entity in non-endemic zones. In most parts of Asia, hepatobiliary fascioliasis is sporadic. Human hepatobiliary infection by this trematode has two distinct phases: an acute hepatic phase and a chronic biliary phase. Hepatobiliary infection is mostly associated with intense peripheral eosinophilia. In addition to the classically defined hepatic-phase and biliary-phase fascioliasis, some cases may present an overlap of the two phases. Chronic liver abscess formation is a rare presentation. We describe a surprising case of hepatobiliary fascioliasis in a patient who presented to us with a liver abscess without intense peripheral eosinophilia, a rare presentation of human fascioliasis, especially in non-endemic zones. Copyright © 2013 Arab Journal of Gastroenterology. Published by Elsevier Ltd. All rights reserved.

  16. Large complex ovarian cyst managed by laparoscopy

    OpenAIRE

    Dipak J. Limbachiya; Ankit Chaudhari; Grishma P. Agrawal

    2017-01-01

    Complex ovarian cyst with secondary infection is a rare disease that hardly responds to the usual antibiotic treatment. Most of the time it hampers the day-to-day activities of women, and it is commonly known to cause pain and fever. To our surprise, in our case the cyst was large enough to compress the ureter and was adherent to the surrounding structures. Laparoscopic removal of the cyst was performed and the specimen was sent for histopathological examination.

  17. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω³/(2π²c³). Protons, free electrons and atoms are sources for this radiation: each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles, with scattered spectral density ρ(ω,r) = ρ(ω)σ(ω)/(4πr²). The spectral density of the dipole radiation of the Universe is then ρ = ∫₀ᴿ n ρ(ω,r) 4πr² dr. If the scattering cross-sections of atoms, protons and electrons are all taken equal to the Thomson cross-section, σ = σ_T, then ρ ≈ ρ(ω)σ_T R n. Moreover, if ρ = ρ(ω), then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T R n ≈ 1 is equivalent to R/r_e = e²/(G m_p m_e), i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  18. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    Full Text Available We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chain fields indexed by such a tree and give the strong limit law of the conditional sample entropy rate.

  19. Large eddy simulation study of the kinetic energy entrainment by energetic turbulent flow structures in large wind farms

    Science.gov (United States)

    VerHulst, Claire; Meneveau, Charles

    2014-02-01

    In this study, we address the question of how kinetic energy is entrained into large wind turbine arrays and, in particular, how large-scale flow structures contribute to such entrainment. Previous research has shown this entrainment to be an important limiting factor in the performance of very large arrays where the flow becomes fully developed and there is a balance between the forcing of the atmospheric boundary layer and the resistance of the wind turbines. Given the high Reynolds numbers and domain sizes on the order of kilometers, we rely on wall-modeled large eddy simulation (LES) to simulate turbulent flow within the wind farm. Three-dimensional proper orthogonal decomposition (POD) analysis is then used to identify the most energetic flow structures present in the LES data. We quantify the contribution of each POD mode to the kinetic energy entrainment and its dependence on the layout of the wind turbine array. The primary large-scale structures are found to be streamwise, counter-rotating vortices located above the height of the wind turbines. While the flow is periodic, the geometry is not invariant to all horizontal translations due to the presence of the wind turbines and thus POD modes need not be Fourier modes. Differences of the obtained modes with Fourier modes are documented. Some of the modes are responsible for a large fraction of the kinetic energy flux to the wind turbine region. Surprisingly, more flow structures (POD modes) are needed to capture at least 40% of the turbulent kinetic energy, for which the POD analysis is optimal, than are needed to capture at least 40% of the kinetic energy flux to the turbines. For comparison, we consider the cases of aligned and staggered wind turbine arrays in a neutral atmospheric boundary layer as well as a reference case without wind turbines. While the general characteristics of the flow structures are robust, the net kinetic energy entrainment to the turbines depends on the presence and relative
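    The study's three-dimensional POD of LES data is beyond a snippet, but the underlying snapshot-POD machinery can be sketched with an SVD on synthetic data (a generic illustration, not the authors' wind-farm computation): the left singular vectors are the POD modes, and the squared singular values give the energy fraction captured by each mode.

```python
import numpy as np

# Snapshot POD via SVD: rows = spatial points, columns = snapshots in time.
rng = np.random.default_rng(0)
n_points, n_snaps = 400, 200
x = np.linspace(0, 2 * np.pi, n_points)

# Synthetic field: two coherent structures plus weak noise
t = rng.uniform(0, 2 * np.pi, n_snaps)
field = (np.outer(np.sin(x), np.cos(t))
         + 0.5 * np.outer(np.sin(2 * x), np.sin(t))
         + 0.05 * rng.standard_normal((n_points, n_snaps)))

field -= field.mean(axis=1, keepdims=True)   # subtract the temporal mean
modes, s, _ = np.linalg.svd(field, full_matrices=False)
energy = s**2 / np.sum(s**2)                 # energy fraction per mode

print(energy[:4])  # the two coherent structures dominate the spectrum
```

    As in the paper's discussion, a POD basis is optimal for capturing turbulent kinetic energy, but nothing guarantees that the most energetic modes are also the ones carrying the largest kinetic energy flux to the turbines.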

  20. First passage times in homogeneous nucleation: Dependence on the total number of particles

    International Nuclear Information System (INIS)

    Yvinec, Romain; Bernard, Samuel; Pujo-Menjouet, Laurent; Hingant, Erwan

    2016-01-01

    Motivated by nucleation and molecular aggregation in physical, chemical, and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of times required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the first assembly time behavior as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may have a surprisingly weak dependence on the total number of particles. We highlight how higher statistics (variance, distribution) of the first passage time may nevertheless help to infer key parameters, such as the size of the maximum cluster. Finally, we present a framework to quantify formation of macroscopic sized clusters, which are (asymptotically) very unlikely and occur as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherent infrequent stochastic processes, in contrast to classical nucleation theory
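    The first-assembly-time statistics described above can be explored with a minimal Gillespie simulation. The sketch below assumes irreversible monomer-addition kinetics with a single rate constant (the paper treats general size-dependent aggregation and fragmentation rates); the particle count M, maximum cluster size N, and rate are illustrative.

```python
import numpy as np

def first_assembly_time(M=30, N=6, rate=1.0, rng=None):
    """Gillespie simulation of irreversible monomer-addition kinetics:
    a cluster of size k grows to k+1 by absorbing a free monomer.
    Returns the first time a cluster reaches the maximum size N,
    starting from M free monomers (inf if the system gets stuck)."""
    rng = rng or np.random.default_rng()
    counts = np.zeros(N + 1, dtype=int)   # counts[k] = clusters of size k
    counts[1] = M
    t = 0.0
    while counts[N] == 0:
        m = counts[1]
        # propensities: monomer + cluster(k) -> cluster(k+1), k = 1..N-1
        props = np.array([rate * m * (m - 1) / 2 if k == 1
                          else rate * m * counts[k]
                          for k in range(1, N)], dtype=float)
        total = props.sum()
        if total == 0:                    # no free monomers left
            return np.inf
        t += rng.exponential(1.0 / total)
        k = 1 + rng.choice(N - 1, p=props / total)
        counts[1] -= 1                    # for k == 1 this consumes two monomers
        counts[k] -= 1
        counts[k + 1] += 1
    return t

rng = np.random.default_rng(1)
times = [first_assembly_time(rng=rng) for _ in range(200)]
```

    Repeating this over a range of M gives the dependence of the mean (and higher moments) of the first passage time on the total number of particles discussed in the abstract.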

  1. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data were derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically significantly associated with reporting fewer teeth. Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles is an important public health intervention to increase tooth retention in middle and older age.
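    As a sketch of the bivariate analysis behind odds ratios like those reported above, the helper below computes an OR with a Woolf 95% confidence interval from a 2x2 table. The counts are hypothetical, not taken from the cohort.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# e.g. females vs males reporting fewer teeth (hypothetical counts)
print(odds_ratio(320, 1680, 250, 1750))
```

    The multivariate (covariate-adjusted) ORs in the abstract would instead come from fitting a logistic regression, but the interpretation of each coefficient's exponential is the same.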

  2. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

    The problems of managing desktop systems are far from resolved as we deploy increasing numbers of PCs, Macintoshes and UN*X workstations. This paper concentrates on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)

  3. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal healthcare devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated, in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the proposed system performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are created to represent the same number of PHD agents. The loss ratio of ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. In contrast, no message loss occurs in the multilayered system proposed in this study, which demonstrates its superiority to the normal system under heavy traffic.

  4. Team play with a powerful and independent agent: operational experiences and automation surprises on the Airbus A-320

    Science.gov (United States)

    Sarter, N. B.; Woods, D. D.

    1997-01-01

    Research and operational experience have shown that one of the major problems with pilot-automation interaction is a lack of mode awareness (i.e., the current and future status and behavior of the automation). As a result, pilots sometimes experience so-called automation surprises when the automation takes an unexpected action or fails to behave as anticipated. A lack of mode awareness and automation surprises can be viewed as symptoms of a mismatch between human and machine properties and capabilities. Changes in automation design can therefore be expected to affect the likelihood and nature of problems encountered by pilots. Previous studies have focused exclusively on early generation "glass cockpit" aircraft that were designed based on a similar automation philosophy. To find out whether similar difficulties with maintaining mode awareness are encountered on more advanced aircraft, a corpus of automation surprises was gathered from pilots of the Airbus A-320, an aircraft characterized by high levels of autonomy, authority, and complexity. To understand the underlying reasons for reported breakdowns in human-automation coordination, we also asked pilots about their monitoring strategies and their experiences with and attitude toward the unique design of flight controls on this aircraft.

  5. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows one to predict the PDFs of scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.

  6. Surprisal analysis of Glioblastoma Multiform (GBM) microRNA dynamics unveils tumor specific phenotype.

    Science.gov (United States)

    Zadran, Sohila; Remacle, Francoise; Levine, Raphael

    2014-01-01

    Glioblastoma multiforme (GBM) is the most fatal form of all brain cancers in humans. Currently there are limited diagnostic tools for GBM detection. Here, we applied surprisal analysis, a theory grounded in thermodynamics, to unveil how biomolecule energetics, specifically a redistribution of free energy amongst microRNAs (miRNAs), results in a system deviating from a non-cancer state to the GBM-specific phenotypic state. Utilizing global miRNA microarray expression data from normal and GBM patient tumors, surprisal analysis characterizes a miRNA system response capable of distinguishing GBM samples from normal tissue biopsy samples. We show that the miRNA ensemble contributing to this system behavior defines a disease phenotypic state specific to GBM and is therefore a unique GBM-specific thermodynamic signature. MiRNAs implicated in the regulation of stochastic signaling processes crucial to the hallmarks of human cancer dominate this GBM-specific phenotypic state. With this theory, we were able to distinguish GBM patients with high fidelity solely by monitoring the dynamics of miRNAs present in patients' biopsy samples. We anticipate that the GBM-specific thermodynamic signature will provide a critical translational tool in better characterizing cancer types and in the development of future therapeutics for GBM.

  7. Surprise, Memory, and Retrospective Judgment Making: Testing Cognitive Reconstruction Theories of the Hindsight Bias Effect

    Science.gov (United States)

    Ash, Ivan K.

    2009-01-01

    Hindsight bias has been shown to be a pervasive and potentially harmful decision-making bias. A review of 4 competing cognitive reconstruction theories of hindsight bias revealed conflicting predictions about the role and effect of expectation or surprise in retrospective judgment formation. Two experiments tested these predictions examining the…

  8. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage of yielding a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
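    The paper's method is an MRT scheme for miscible flow; as a much simpler sketch of the underlying lattice Boltzmann machinery, the code below runs a single-relaxation-time (BGK) D2Q5 model for pure scalar diffusion on a periodic grid. All parameters (grid, relaxation time, initial pulse) are illustrative, and the MRT collision matrix of the paper is deliberately not reproduced here.

```python
import numpy as np

# Minimal BGK (not MRT) D2Q5 lattice Boltzmann solver for diffusion
# of a passive scalar C; D = c_s^2 * (tau - 0.5) with c_s^2 = 1/3.
nx, ny, steps = 64, 64, 200
tau = 1.0
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])   # D2Q5 weights
ex = np.array([0, 1, -1, 0, 0])           # lattice velocities
ey = np.array([0, 0, 0, 1, -1])

C = np.zeros((nx, ny))
C[nx // 2, ny // 2] = 1.0                 # point pulse of scalar
f = w[:, None, None] * C[None]            # equilibrium initialization

for _ in range(steps):
    C = f.sum(axis=0)                     # macroscopic scalar field
    feq = w[:, None, None] * C[None]      # zero-velocity equilibrium
    f += (feq - f) / tau                  # BGK collision
    for i in range(5):                    # streaming, periodic boundaries
        f[i] = np.roll(np.roll(f[i], ex[i], axis=0), ey[i], axis=1)

C = f.sum(axis=0)
```

    An MRT version would replace the scalar relaxation `1/tau` by a matrix relaxation in moment space, which is what gives the paper its independent control over viscosity and diffusivity.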

  9. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on the nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being exactly calculated. The method is applied to subspaces containing a large number of quasi particles [fr
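    A minimal numerical sketch of the moments idea: for a toy random-matrix "Hamiltonian", the first two moments of the eigenvalue distribution follow from traces alone, and a Gaussian frequency function with those moments approximates the exact level density without diagonalization. The matrix and its normalization are illustrative stand-ins for a shell-model interaction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian: random real symmetric matrix, Wigner-style scaling.
n = 400
H = rng.standard_normal((n, n))
H = (H + H.T) / np.sqrt(2 * n)

d = float(n)
mu1 = np.trace(H) / d                  # centroid (first moment)
mu2 = np.trace(H @ H) / d - mu1**2     # variance (width)

def rho(E):
    """Gaussian frequency function with the same first two moments."""
    return np.exp(-(E - mu1)**2 / (2 * mu2)) / np.sqrt(2 * np.pi * mu2)

# For comparison only: the exact spectrum (the method avoids this step).
eigs = np.linalg.eigvalsh(H)
```

    In the spectral distribution method proper, the traces are evaluated algebraically in the many-body space (e.g. a subspace with many quasiparticles), which is precisely what makes diagonalization unnecessary.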

  10. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  11. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six-times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the author's knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  12. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results of countable Markov chains indexed by a Cayley tree and generalizes the relative results of finite Markov chains indexed by a uniformly bounded tree.
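    The flavor of the result can be checked numerically on a binary tree (a tree with uniformly bounded degree): the sketch below samples a two-state Markov chain indexed by the tree, each child's state drawn from the transition row of its parent's state, and compares the empirical state frequencies over the whole tree with the stationary distribution. The transition matrix is an arbitrary ergodic example, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state ergodic chain; stationary distribution satisfies pi P = pi.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4/7, 3/7])

levels = 16
states = [np.array([0])]                       # root in state 0
for _ in range(levels):
    parents = np.repeat(states[-1], 2)         # two children per node
    u = rng.random(parents.size)
    children = (u > P[parents, 0]).astype(int) # sample from row P[parent]
    states.append(children)

all_states = np.concatenate(states)            # 2**(levels+1) - 1 nodes
freq = np.bincount(all_states, minlength=2) / all_states.size
```

    The theorem in the paper covers countable state spaces and general bounded-degree trees; this finite-state binary-tree experiment only illustrates the convergence of frequencies that the strong law guarantees.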

  13. Physics Nobel prize 2004: Surprising theory wins physics Nobel

    CERN Multimedia

    2004-01-01

    From left to right: David Politzer, David Gross and Frank Wilczek. For their understanding of counter-intuitive aspects of the strong force, which governs quarks inside protons and neutrons, on 5 October three American physicists were awarded the 2004 Nobel Prize in Physics. David J. Gross (Kavli Institute of Theoretical Physics, University of California, Santa Barbara), H. David Politzer (California Institute of Technology), and Frank Wilczek (Massachusetts Institute of Technology) made a key theoretical discovery with a surprising result: the closer quarks are together, the weaker the force - opposite to what is seen with electromagnetism and gravity. Rather, the strong force is analogous to a rubber band stretching, where the force increases as the quarks get farther apart. These physicists discovered this property of quarks, known as asymptotic freedom, in 1973. It later became a key part of the theory of quantum chromodynamics (QCD) and the Standard Model, the current best theory to describe the interac...

  14. Recreations in the theory of numbers the queen of mathematics entertains

    CERN Document Server

    Beiler, Albert H

    1966-01-01

    Number theory, the Queen of Mathematics, is an almost purely theoretical science. Yet it can be the source of endlessly intriguing puzzle problems, as this remarkable book demonstrates. This is the first book to deal exclusively with the recreational aspects of the subject and it is certain to be a delightful surprise to all devotees of the mathematical puzzle, from the rawest beginner to the most practiced expert. Almost every aspect of the theory of numbers that could conceivably be of interest to the layman is dealt with, all from the recreational point of view. Readers will become acquainted with divisors, perfect numbers, the ingenious invention of congruences by Gauss, scales of notation, endless decimals, Pythagorean triangles (there is a list of the first 100 with consecutive legs; the 100th has a leg of 77 digits), oddities about squares, methods of factoring, mysteries of prime numbers, Gauss's Golden Theorem, polygonal and pyramidal numbers, the Pell Equation, the unsolved Last Theorem of Fermat, a...

  15. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  16. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is generated inevitably, along with sound attenuation when the sound signal traverses the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of bubbly liquid on the acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients at various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The results show that the sound intensity attenuation is fairly small under these conditions; consequently, the intensity attenuation can be neglected in engineering. (paper)
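    As a hedged sketch of the kind of bubbly-liquid acoustics involved, the code below evaluates Wood's classical mixture rule for the effective sound speed (a standard substitute, not necessarily the formula derived in the paper) and a plane-wave intensity-attenuation law. All property values and the attenuation coefficient are illustrative.

```python
import numpy as np

# Illustrative fluid properties (SI units), not the paper's values.
rho_l, c_l = 1000.0, 1500.0      # liquid density (kg/m^3), sound speed (m/s)
rho_g, c_g = 1.2, 340.0          # gas/vapor

def wood_speed(beta):
    """Wood's rule: effective sound speed of a liquid-gas mixture
    with gas volume fraction beta (compressibilities add by volume)."""
    rho_m = beta * rho_g + (1 - beta) * rho_l
    compress = beta / (rho_g * c_g**2) + (1 - beta) / (rho_l * c_l**2)
    return 1.0 / np.sqrt(rho_m * compress)

def intensity(I0, alpha, x):
    """Plane-wave intensity after distance x, attenuation alpha in Np/m:
    I = I0 * exp(-2 * alpha * x)."""
    return I0 * np.exp(-2 * alpha * x)

print(wood_speed(0.0), wood_speed(1e-3), wood_speed(0.5))
```

    Even a very small vapor fraction drops the mixture sound speed far below that of either pure phase, which is why the cavity region must be treated as a distinct acoustic medium.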

  17. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  18. Surprising judgments about robot drivers: Experiments on rising expectations and blaming humans

    Directory of Open Access Journals (Sweden)

    Peter Danielson

    2015-05-01

    Full Text Available N-Reasons is an experimental Internet survey platform designed to enhance public participation in applied ethics and policy. N-Reasons encourages individuals to generate reasons to support their judgments, and groups to converge on a common set of reasons pro and con various issues. In the Robot Ethics Survey some of the reasons contributed surprising judgments about autonomous machines. Presented with a version of the trolley problem with an autonomous train as the agent, participants gave unexpected answers, revealing high expectations for the autonomous machine and shifting blame from the automated device to the humans in the scenario. Further experiments with a standard pair of human-only trolley problems refine these results. While showing the high expectations even when no autonomous machine is involved, human bystanders are only blamed in the machine case. A third experiment explicitly aimed at responsibility for driverless cars confirms our findings about shifting blame in the case of autonomous machine agents. We conclude methodologically that both results point to the power of an experimental survey-based approach to public participation to explore surprising assumptions and judgments in applied ethics. However, both results also support using caution when interpreting survey results in ethics, demonstrating the importance of qualitative data to provide further context for evaluating judgments revealed by surveys. On the ethics side, the result about shifting blame to humans interacting with autonomous machines suggests caution about the unintended consequences of intuitive principles requiring human responsibility. http://dx.doi.org/10.5324/eip.v9i1.1727

  19. Surprises from a Deep ASCA Spectrum of the Broad Absorption Line Quasar PHL 5200

    Science.gov (United States)

    Mathur, Smita; Matt, G.; Green, P. J.; Elvis, M.; Singh, K. P.

    2002-01-01

    We present a deep (approx. 85 ks) ASCA observation of the prototype broad absorption line quasar (BALQSO) PHL 5200. This is the best X-ray spectrum of a BALQSO yet. We find the following: (1) The source is not intrinsically X-ray weak. (2) The line-of-sight absorption is very strong, with N(sub H) = 5 x 10(exp 23)/sq cm. (3) The absorber does not cover the source completely; the covering fraction is approx. 90%. This is consistent with the large optical polarization observed in this source, implying multiple lines of sight. The most surprising result of this observation is that (4) the spectrum of this BALQSO is not exactly similar to other radio-quiet quasars. The hard X-ray spectrum of PHL 5200 is steep, with the power-law spectral index alpha approx. 1.5. This is similar to the steepest hard X-ray slopes observed so far. At low redshifts, such steep slopes are observed in narrow-line Seyfert 1 (NLS1) galaxies, believed to be accreting at a high Eddington rate. This observation strengthens the analogy between BALQSOs and NLS1 galaxies and supports the hypothesis that BALQSOs represent an early evolutionary state of quasars. While it is well accepted that the orientation to the line of sight determines the appearance of a quasar, age seems to play a significant role as well.

  20. Arctic skate Amblyraja hyperborea preys on remarkably large glacial eelpouts Lycodes frigidus.

    Science.gov (United States)

    Byrkjedal, I; Christiansen, J S; Karamushko, O V; Langhelle, G; Lynghammar, A

    2015-01-01

    During scientific surveys on the continental slopes north-west of Spitsbergen and off north-east Greenland (c. 600 and 1000 m depths), two female Arctic skates Amblyraja hyperborea were caught while swallowing extraordinarily large individuals of glacial eelpout Lycodes frigidus. The total length (LT) of the prey constituted 50 and 80% of the LT of the skates, which reveals that A. hyperborea is a capable predator of fishes of surprisingly large relative size. © 2014 The Fisheries Society of the British Isles.

  1. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdos and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  2. Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL

    Science.gov (United States)

    Port, Dan; Nikora, Allen; Hihn, Jairus; Huang, LiGuo

    2011-01-01

    Often repositories of systems engineering artifacts at NASA's Jet Propulsion Laboratory (JPL) are so large and poorly structured that they have outgrown our capability to effectively manually process their contents to extract useful information. Sophisticated text mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experience exploring such methods at JPL, mainly in three areas (historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies), has shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick 'wins' or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates on some of these benefits and the important lessons we learned while preparing and applying text mining to large unstructured system artifacts at JPL, in the hope of benefiting future text mining applications in similar problem domains and of extension to broader areas of application.

  3. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with p_T, eventually leveling off proportional to A^1.1.
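    The quoted A-dependence is a power law, and the exponent comes from a log-linear fit across the three targets. The snippet below shows that standard fit using fabricated yields (the experiment's measured cross sections are not reproduced here); only the mass numbers of Be, Ti, and W are real.

```python
import numpy as np

# Mass numbers of the three targets (Be, Ti, W).
A = np.array([9.0, 48.0, 184.0])

# Fabricated per-nucleus yields following an exact A^1.1 power law,
# standing in for measured invariant cross sections at fixed p_T.
sigma = 2.0 * A**1.1

# Fit sigma = c * A^alpha  <=>  log(sigma) = alpha*log(A) + log(c)
alpha, log_c = np.polyfit(np.log(A), np.log(sigma), 1)
print(alpha)   # recovers the exponent of the fabricated law
```

    With real data one point per target and per p_T bin is fitted this way, giving the growth of alpha with p_T described in the abstract.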

  4. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number can take on unrestricted integers: 1 < n < ∞. The Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics

  5. ‘Surprise’: Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012

    Directory of Open Access Journals (Sweden)

    Emma Denehy

    2012-11-01

    Full Text Available Objective: In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. Methods: A guest list was obtained and a retrospective cohort study undertaken. A combination of paper-based and telephone questionnaires was used to collect exposure and outcome information. An environmental investigation was conducted by Food Policy and Programs Branch at the implicated premises. Results: All 57 guests completed the questionnaire (100% response rate), and 15 met the case definition. Analysis showed a significant association between illness and consumption of chicken liver pâté (relative risk: 16.7, 95% confidence interval: 2.4–118.6). No other food or beverage served at the party was associated with illness. Three guests submitted stool samples; all were positive for Campylobacter. The environmental investigation identified that the cooking process used in the preparation of chicken liver pâté may have been inconsistent, resulting in some portions not cooked adequately to inactivate potential Campylobacter contamination. Discussion: Chicken liver products are a known source of Campylobacter infection; therefore, education of food handlers remains a high priority. To better identify outbreaks among the large number of Campylobacter notifications, routine typing of Campylobacter isolates is recommended.
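    A relative risk like the one reported above comes from a 2x2 cohort table. The helper below computes the RR and a 95% confidence interval on the log scale; the counts are hypothetical stand-ins, not the outbreak's actual line list.

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk and 95% CI for a cohort 2x2 table:
    a = exposed ill, b = exposed well, c = unexposed ill, d = unexposed well."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts for a small cohort (not the paper's data):
# 14 of 24 pâté eaters ill, 1 of 33 non-eaters ill.
print(relative_risk(14, 10, 1, 32))
```

    Note how a single unexposed case makes the interval very wide, which is consistent with the broad CI (2.4–118.6) reported for this small cohort.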

  6. ‘Surprise’: Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012

    Science.gov (United States)

    Fearnley, Emily; Denehy, Emma

    2012-01-01

    Objective In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. Methods A guest list was obtained and a retrospective cohort study undertaken. A combination of paper-based and telephone questionnaires were used to collect exposure and outcome information. An environmental investigation was conducted by Food Policy and Programs Branch at the implicated premises. Results All 57 guests completed the questionnaire (100% response rate), and 15 met the case definition. Analysis showed a significant association between illness and consumption of chicken liver pâté (relative risk: 16.7, 95% confidence interval: 2.4–118.6). No other food or beverage served at the party was associated with illness. Three guests submitted stool samples; all were positive for Campylobacter. The environmental investigation identified that the cooking process used in the preparation of chicken liver pâté may have been inconsistent, resulting in some portions not cooked adequately to inactivate potential Campylobacter contamination. Discussion Chicken liver products are a known source of Campylobacter infection; therefore, education of food handlers remains a high priority. To better identify outbreaks among the large number of Campylobacter notifications, routine typing of Campylobacter isolates is recommended. PMID:23908933

  7. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed dose) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. 
This set of parameters is consistent with in vitro
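    The BED metric used above as a common yardstick follows directly from the LQ model. The sketch below is a minimal illustration of that bookkeeping; all parameter values (the α/β ratio, α, T_pot, and the fractionation schedules) are illustrative placeholders, not the parameter set derived in the study.

```python
# Minimal sketch of the linear-quadratic (LQ) biologically effective dose (BED),
# the metric used to compare fractionation regimes. All numeric parameters
# below are illustrative placeholders, NOT the study's derived values.
import math

def bed(n_fractions, dose_per_fraction, alpha_beta,
        alpha=None, t_treat=None, t_pot=None):
    """BED = n*d*(1 + d/(alpha/beta)), minus a repopulation correction
    ln(2)*T/(alpha*T_pot) when alpha, overall treatment time T, and
    potential doubling time T_pot are all supplied."""
    core = n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)
    if alpha is not None and t_treat is not None and t_pot is not None:
        core -= math.log(2) * t_treat / (alpha * t_pot)
    return core

# Compare a conventional 25 x 2 Gy schedule with a hypofractionated
# 16 x 2.66 Gy schedule for an assumed alpha/beta of 4 Gy (illustrative only):
print(bed(25, 2.0, 4.0))   # 75.0 Gy
print(bed(16, 2.66, 4.0))  # ~70.86 Gy
```

    Two regimes with similar BED are predicted by the LQ model to be roughly isoeffective, which is how disparate EBRT and BT schedules can be placed on a common scale.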

  8. On predicting quantal cross sections by interpolation: Surprisal analysis of j_z CCS and statistical j_z results

    International Nuclear Information System (INIS)

    Goldflam, R.; Kouri, D.J.

    1976-01-01

    New methods for predicting the full matrix of integral cross sections are developed by combining the surprisal analysis of Bernstein and Levine with the j_z-conserving coupled states method (j_z CCS) of McGuire, Kouri, and Pack and with the statistical j_z approximation (Sj_z) of Kouri, Shimoni, and Heil. A variety of approaches is possible and only three are studied in the present work. These are (a) a surprisal fit of the j=0→j' column of the j_z CCS cross section matrix (thereby requiring only a solution of the λ=0 set of j_z CCS equations), (b) a surprisal fit of the λ̄=0 Sj_z cross section matrix (again requiring solution of the λ=0 set of j_z CCS equations only), and (c) a surprisal fit of a λ̄≠0 Sj_z submatrix (involving input cross sections for j, j' ≥ λ̄ transitions only). The last approach requires the solution of the λ=λ̄ set of j_z CCS equations only, which requires less computational effort than the effective potential method. We explore three different choices for the prior, and two-parameter (i.e., linear) and three-parameter (i.e., parabolic) fits, as applied to Ar–N₂ collisions. The results are in general very encouraging and for one choice of prior give results which are within 20% of the exact j_z CCS results
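    A two-parameter (linear) surprisal fit of the kind used here can be illustrated with a toy calculation: compute the surprisal I = −ln(σ/σ_prior) for a "known" column of cross sections, fit it linearly in the energy gap, and invert the fit to predict unmeasured transitions. All numbers below are synthetic placeholders, not Ar–N₂ data.

```python
# Toy two-parameter (linear) surprisal fit in the spirit of the
# Bernstein-Levine analysis. Cross sections, energy gaps, and the prior
# are synthetic placeholders, not Ar-N2 results.
import math

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = intercept + slope*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Synthetic "known" column of cross sections sigma(0 -> j') and a flat prior:
delta_e = [1.0, 2.0, 3.0, 4.0]   # energy gaps (arbitrary units)
sigma   = [5.0, 2.5, 1.2, 0.6]   # synthetic cross sections
prior   = 5.0                    # flat prior (placeholder)

# Surprisal I = -ln(sigma/prior); fit I = theta0 + theta1*|dE|:
surprisal = [-math.log(s / prior) for s in sigma]
theta0, theta1 = linear_fit(delta_e, surprisal)

# Invert the fit to predict an unmeasured transition at dE = 5:
sigma_pred = prior * math.exp(-(theta0 + theta1 * 5.0))
```

    Because the surprisal is nearly linear in the energy gap for many systems, two fitted parameters suffice to interpolate (or extrapolate) an entire column of the cross section matrix.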

  9. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting a parent droplet into multiple daughter droplets of desired sizes is often required to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or a passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; in theory, the volumetric ratio of the daughter droplets depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble, and non-breakup, under various flow conditions and T-junction configurations. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large Capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  10. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  11. Aeolian comminution experiments revealing surprising sandball mineral aggregates

    Science.gov (United States)

    Nørnberg, P.; Bak, E.; Finster, K.; Gunnlaugsson, H. P.; Iversen, J. J.; Jensen, S. Knak; Merrison, J. P.

    2014-06-01

    We have undertaken a set of wind erosion experiments on a simple and well-defined mineral, quartz. In these experiments, wind action is simulated by end-over-end tumbling of quartz grains in a sealed quartz flask. The tumbling induces collisions among the quartz grains and with the walls of the flask, simulating wind-driven impacts at a speed of ∼1.2 m/s. After several months of tumbling we observed the formation of a large number of spherical sand aggregates, which resemble small snowballs under optical microscopy. Under mechanical load the aggregates are seen to be more elastic than quartz, and their mechanical strength is comparable to, though slightly lower than, that of sintered silica aerogels. Aggregates of this kind have not been reported from field sites or from closed circulation systems. This may be explained by their sparse occurrence; alternatively, in nature the concentration of the aggregate-building particles may be so low that they never meet, appearing only as the most fine-grained tail of the sediment particle size distribution.

  12. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM (trademark) finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder, at an angle of 90 deg with respect to the wake's downstream axis, was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  13. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low-turbulence environment. Both the high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8 deg to -51.0 deg. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At

  14. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems as the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.

  15. Auxiliary basis expansions for large-scale electronic structure calculations.

    Science.gov (United States)

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

  16. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the tria...

  17. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered to be either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane, as well as the mean value and two-point correlation function of its elements, when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g, which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  18. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered to be either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane, as well as the mean value and two-point correlation function of its elements, when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g, which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  19. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  20. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Abstract Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. 
The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
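    The competition between specific processing and non-specific degradation described above can be captured in a two-species steady-state sketch: pre-crRNA is transcribed at rate k_t, processed into crRNA at rate k_p, and non-specifically degraded at rate k_d, while crRNA decays slowly at rate d_cr. All rate constants below are illustrative placeholders, not the paper's fitted parameters.

```python
# Minimal two-species sketch of the CRISPR processing model described above:
#   d[pre]/dt = k_t - (k_p + k_d)*pre      (transcription, processing, decay)
#   d[cr]/dt  = k_p*pre - d_cr*cr          (crRNA production and slow decay)
# All rate constants are illustrative placeholders, not fitted values.

def steady_state(k_t, k_p, k_d, d_cr):
    """Return (pre-crRNA, crRNA) steady-state levels of the linear model."""
    pre = k_t / (k_p + k_d)   # pre-crRNA balances transcription vs removal
    cr = k_p * pre / d_cr     # crRNA balances processing vs slow decay
    return pre, cr

# Overexpressing cas genes raises the specific processing rate k_p:
low  = steady_state(k_t=1.0, k_p=0.1,  k_d=1.0, d_cr=0.01)
high = steady_state(k_t=1.0, k_p=10.0, k_d=1.0, d_cr=0.01)

# A small drop in pre-crRNA (~0.91 -> ~0.09) buys a large crRNA gain
# (~9.1 -> ~90.9): linear amplification set by the stability ratio.
```

    The gain in crRNA divided by the loss in pre-crRNA is constant here (it equals (k_p + k_d)... combinations of the rate constants independent of the overexpression level), which is the "strong linear amplification" the model reports; without fast non-specific degradation (k_d → 0) almost all pre-crRNA is already processed, and raising k_p buys nothing.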

  1. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  2. Prediction, Expectation, and Surprise: Methods, Designs, and Study of a Deployed Traffic Forecasting Service

    OpenAIRE

    Horvitz, Eric J.; Apacible, Johnson; Sarin, Raman; Liao, Lin

    2012-01-01

    We present research on developing models that forecast traffic flow and congestion in the Greater Seattle area. The research has led to the deployment of a service named JamBayes, that is being actively used by over 2,500 users via smartphones and desktop versions of the system. We review the modeling effort and describe experiments probing the predictive accuracy of the models. Finally, we present research on building models that can identify current and future surprises, via efforts on mode...

  3. Collaborative Resilience to Episodic Shocks and Surprises: A Very Long-Term Case Study of Zanjera Irrigation in the Philippines 1979–2010

    Directory of Open Access Journals (Sweden)

    Ruth Yabes

    2015-07-01

    This thirty-year case study uses surveys, semi-structured interviews, and content analysis to examine the adaptive capacity of Zanjera San Marcelino, an indigenous irrigation management system in the northern Philippines. This common pool resource (CPR) system exists within a turbulent social-ecological system (SES) characterized by episodic shocks, such as large typhoons, as well as novel surprises, such as national political regime change and the construction of large dams. The Zanjera nimbly responded to these challenges, although sometimes in ways that left its structure and function substantially altered. While partial integration with the Philippine National Irrigation Agency was critical to the Zanjera’s success, this relationship required ongoing improvisation and renegotiation. Over time, the Zanjera showed an increasing capacity to learn and adapt. A core contribution of this analysis is the integration of a CPR study within an SES framework to examine resilience, made possible by the occurrence of a wide range of challenges to the Zanjera’s function and survival over the long period of study. Long-term analyses like this one, however rare, are particularly useful for understanding the adaptive and transformative dimensions of resilience.

  4. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic precision. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
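    One standard exact construction for such pairs (not necessarily the specific routine of the report) draws two independent standard normals and mixes them through the correlation coefficient, so that the pair has exactly the requested means, standard deviations, and correlation in distribution:

```python
# Sketch of the standard conditional-decomposition method for generating
# correlated normal pairs: mix two independent standard normals through
# the correlation coefficient rho. This is one exact construction; it is
# not claimed to be the specific FORTRAN routine of the report.
import math
import random

def bivariate_normal(mu1, mu2, sd1, sd2, rho, rng=random):
    """Return one (x, y) pair with the requested means, SDs, and correlation."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x = mu1 + sd1 * z1
    y = mu2 + sd2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y

# Empirical check of the requested correlation (placeholder parameters):
random.seed(1)
pairs = [bivariate_normal(0.0, 0.0, 1.0, 2.0, 0.8) for _ in range(200_000)]
```

    As in the report, the output is exact in theory; residual error in the sample correlation comes only from the quality of the underlying generator and finite sampling.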

  5. Macroscopic quantum phenomena from the large N perspective

    International Nuclear Information System (INIS)

    Chou, C H; Hu, B L; Subasi, Y

    2011-01-01

    Macroscopic quantum phenomena (MQP) is a relatively new research venue, with exciting ongoing experiments and bright prospects, yet with surprisingly little theoretical activity. What makes MQP intellectually stimulating is that it is counterpoised against the traditional view that macroscopic means classical. This simplistic and hitherto rarely challenged view needs to be scrutinized anew, perhaps with much of the conventional wisdom repealed. In this series of papers we report on a systematic investigation into some key foundational issues of MQP, with the hope of constructing a viable theoretical framework for this new endeavour. The three major themes discussed in these three essays are the large N expansion, the correlation hierarchy, and quantum entanglement for systems of 'large' sizes, with many components or degrees of freedom. In this paper we use different theories in a variety of contexts to examine the conditions or criteria whereby a macroscopic quantum system may take on classical attributes and, more interestingly, keep some of its quantum features. The theories we consider here are the O(N) quantum mechanical model, semiclassical stochastic gravity, and gauge/string theories; the contexts include that of a 'quantum roll' in inflationary cosmology, entropy generation in the quantum Vlasov equation for plasmas, the leading-order and next-to-leading-order large N behaviour, and hydrodynamic/thermodynamic limits. The criteria for classicality in our consideration include the use of uncertainty relations, the correlation between classical canonical variables, randomization of quantum phase, environment-induced decoherence, decoherent histories of hydrodynamic variables, etc. All this exercise is to ask only one simple question: Is it really so surprising that quantum features can appear in macroscopic objects? 
By examining different representative systems where detailed theoretical analysis has been carried out, we find that there is no a priori

  6. Smartphone Response System Using Twitter to Enable Effective Interaction and Improve Engagement in Large Classrooms

    Science.gov (United States)

    Kim, Yeongjun; Jeong, Soonmook; Ji, Yongwoon; Lee, Sangeun; Kwon, Key Ho; Jeon, Jae Wook

    2015-01-01

    This paper proposes a method for seamless interaction between students and their professor using Twitter, one of the typical social network service (SNS) platforms, in large lectures. During the lecture, the professor poses surprise questions in the form of a quiz on an overhead screen at unexpected moments, and students submit their answers…

  7. Phenomenology of future neutrino experiments with large θ13

    International Nuclear Information System (INIS)

    Minakata, Hisakazu

    2013-01-01

    The question “how small is the lepton mixing angle θ13?” had a convincing answer in a surprisingly short time: θ13 ≃ 9°, a large value comparable to the Chooz limit. It defines a new epoch in the program of determining the lepton mixing parameters, opening the door to searches for lepton CP violation of the Kobayashi-Maskawa type. I discuss the influence of the large value of θ13 on the search for CP violation and the determination of the neutrino mass hierarchy, the remaining unknowns in the standard three-flavor mixing scheme of neutrinos. I emphasize the following two points: (1) Large θ13 makes determination of the mass hierarchy easier. It stimulates the invention of new ideas and necessitates quantitative reexamination of practical ways to explore it. (2) However, large θ13 does not quite make the CP measurement easier, so we do need a “guaranteeing machine” to measure the CP phase δ

  8. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.

  9. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit, and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now enabled the possibility of analyzing a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse was performing a memory task, to investigate how those individual neurons relate to behavior. In addition, we tested our optical platform in investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. 
Finally, using optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  10. Vascular legacy: HOPE ADVANCEs to EMPA-REG and LEADER: A Surprising similarity

    Directory of Open Access Journals (Sweden)

    Sanjay Kalra

    2017-01-01

    Recently reported cardiovascular outcome studies on empagliflozin (EMPA-REG) and liraglutide (LEADER) have spurred interest in this field of diabetology. This commentary compares and contrasts these studies with two equally important outcome trials conducted using blood-pressure-lowering agents. A comparison with the MICROHOPE (ramipril) and ADVANCE (perindopril + indapamide) blood pressure arms throws up interesting facts. The degree of blood pressure lowering, the dissociation between cardiovascular and cerebrovascular benefits, and the discordance between renal and retinal outcomes are surprisingly similar in these trials, conducted using disparate molecules. The time taken to achieve such benefits is similar for all drugs except empagliflozin. Such discussion helps inform rational and evidence-based choice of therapy and forms the framework for future research.

  11. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension d, Q-factorial Gorenstein toric Fano varieties with Picard number ρ_X correspond to simplicial reflexive polytopes with ρ_X + d vertices. Casagrande showed that any d-dimensional simplicial reflexive polytope has at most 3d and 3d − 1 vertices if d is even and odd, respectively. Moreover, for d even there is up to unimodular equivalence only one such polytope with 3d vertices, corresponding to the product of d/2 copies of a del Pezzo surface of degree six. In this paper we completely classify all d-dimensional simplicial reflexive polytopes having 3d − 1 vertices, corresponding to d-dimensional Q-factorial Gorenstein toric Fano varieties with Picard number 2d − 1. For d even, there exist three such varieties, with two being singular, while for d > 1 odd there exist precisely two, both being nonsingular toric fiber

  12. Equilibrium deuterium isotope effect of surprising magnitude

    International Nuclear Information System (INIS)

    Goldstein, M.J.; Pressman, E.J.

    1981-01-01

    Seemingly large deuterium isotope effects are reported for the preference of deuterium for the α-chloro site over the bridgehead or vinyl site in samples of anti-7-chlorobicyclo[4.3.2]undecatetraene-d₁. Studies of molecular models did not provide a basis for these large equilibrium deuterium isotope effects. The possibility is proposed that these isotope effects only appear to be large for want of comparison with isotope effects measured for molecules that might provide even greater contrasts in local force fields

  13. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
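The hill-climbing core of such heuristics can be sketched generically. The following is an illustrative skeleton only, not Leaphy's actual algorithm; `score` and `neighbours` are hypothetical stand-ins for the likelihood function and the topology-rearrangement move set:

```python
def hill_climb(tree, score, neighbours, max_steps=1000):
    """Greedy score-based search: repeatedly move to the first
    neighbouring candidate that improves the score, stopping at a
    local optimum or after max_steps rounds."""
    best, best_score = tree, score(tree)
    for _ in range(max_steps):
        improved = False
        for cand in neighbours(best):
            s = score(cand)
            if s > best_score:
                best, best_score, improved = cand, s, True
                break                 # take the first improving move
        if not improved:
            break                     # local optimum reached
    return best, best_score
```

In real tree search, `neighbours` would enumerate topologies reachable by rearrangements such as nearest-neighbour interchanges, and the resampling strategies described above would be layered on top to escape poor local optima.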

  14. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  15. The roles of large top predators in coastal ecosystems: new insights from long term ecological research

    Science.gov (United States)

    Rosenblatt, Adam E.; Heithaus, Michael R.; Mather, Martha E.; Matich, Philip; Nifong, James C.; Ripple, William J.; Silliman, Brian R.

    2013-01-01

    During recent human history, human activities such as overhunting and habitat destruction have severely impacted many large top predator populations around the world. Studies from a variety of ecosystems show that loss or diminishment of top predator populations can have serious consequences for population and community dynamics and ecosystem stability. However, there are relatively few studies of the roles of large top predators in coastal ecosystems, so that we do not yet completely understand what could happen to coastal areas if large top predators are extirpated or significantly reduced in number. This lack of knowledge is surprising given that coastal areas around the globe are highly valued and densely populated by humans, and thus coastal large top predator populations frequently come into conflict with coastal human populations. This paper reviews what is known about the ecological roles of large top predators in coastal systems and presents a synthesis of recent work from three coastal eastern US Long Term Ecological Research (LTER) sites where long-term studies reveal what appear to be common themes relating to the roles of large top predators in coastal systems. We discuss three specific themes: (1) large top predators acting as mobile links between disparate habitats, (2) large top predators potentially affecting nutrient and biogeochemical dynamics through localized behaviors, and (3) individual specialization of large top predator behaviors. We also discuss how research within the LTER network has led to enhanced understanding of the ecological roles of coastal large top predators. Highlighting this work is intended to encourage further investigation of the roles of large top predators across diverse coastal aquatic habitats and to better inform researchers and ecosystem managers about the importance of large top predators for coastal ecosystem health and stability.

  16. The influence of psychological resilience on the relation between automatic stimulus evaluation and attentional breadth for surprised faces.

    Science.gov (United States)

    Grol, Maud; De Raedt, Rudi

    2015-01-01

    The broaden-and-build theory relates positive emotions to resilience and cognitive broadening. The theory proposes that the broadening effects underlie the relation between positive emotions and resilience, suggesting that resilient people can benefit more from positive emotions at the level of cognitive functioning. Research has investigated the influence of positive emotions on attentional broadening, but the stimulus in the target of attention may also influence attentional breadth, depending on affective stimulus evaluation. Surprised faces are particularly interesting as they are valence ambiguous, therefore, we investigated the relation between affective evaluation--using an affective priming task--and attentional breadth for surprised faces, and how this relation is influenced by resilience. Results show that more positive evaluations are related to more attentional broadening at high levels of resilience, while this relation is reversed at low levels. This indicates that resilient individuals can benefit more from attending to positively evaluated stimuli at the level of attentional broadening.

  17. Atom Surprise: Using Theatre in Primary Science Education

    Science.gov (United States)

    Peleg, Ran; Baram-Tsabari, Ayelet

    2011-10-01

    Early exposure to science may have a lifelong effect on children's attitudes towards science and their motivation to learn science in later life. Out-of-class environments can play a significant role in creating favourable attitudes, while contributing to conceptual learning. Educational science theatre is one form of an out-of-class environment, which has received little research attention. This study aims to describe affective and cognitive learning outcomes of watching such a play and to point to connections between theatrical elements and specific outcomes. "Atom Surprise" is a play portraying several concepts on the topic of matter. A mixed methods approach was adopted to investigate the knowledge and attitudes of children (grades 1-6) from two different school settings who watched the play. Data were gathered using questionnaires and in-depth interviews. Analysis suggested that in both schools children's knowledge on the topic of matter increased after the play with younger children gaining more conceptual knowledge than their older peers. In the public school girls showed greater gains in conceptual knowledge than boys. No significant changes in students' general attitudes towards science were found, however, students demonstrated positive changes towards science learning. Theatrical elements that seemed to be important in children's recollection of the play were the narrative, props and stage effects, and characters. In the children's memory, science was intertwined with the theatrical elements. Nonetheless, children could distinguish well between scientific facts and the fictive narrative.

  18. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

    The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya B Zeldovich)

  19. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of the Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  20. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside them, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor, making the movement of the elastic bottom floors simulate a ground...

  1. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  2. Surprise: Dwarf Galaxy Harbors Supermassive Black Hole

    Science.gov (United States)

    2011-01-01

    The surprising discovery of a supermassive black hole in a small nearby galaxy has given astronomers a tantalizing look at how black holes and galaxies may have grown in the early history of the Universe. Finding a black hole a million times more massive than the Sun in a star-forming dwarf galaxy is a strong indication that supermassive black holes formed before the buildup of galaxies, the astronomers said. The galaxy, called Henize 2-10, 30 million light-years from Earth, has been studied for years, and is forming stars very rapidly. Irregularly shaped and about 3,000 light-years across (compared to 100,000 for our own Milky Way), it resembles what scientists think were some of the first galaxies to form in the early Universe. "This galaxy gives us important clues about a very early phase of galaxy evolution that has not been observed before," said Amy Reines, a Ph.D. candidate at the University of Virginia. Supermassive black holes lie at the cores of all "full-sized" galaxies. In the nearby Universe, there is a direct relationship -- a constant ratio -- between the masses of the black holes and that of the central "bulges" of the galaxies, leading them to conclude that the black holes and bulges affected each other's growth. Two years ago, an international team of astronomers found that black holes in young galaxies in the early Universe were more massive than this ratio would indicate. This, they said, was strong evidence that black holes developed before their surrounding galaxies. "Now, we have found a dwarf galaxy with no bulge at all, yet it has a supermassive black hole. This greatly strengthens the case for the black holes developing first, before the galaxy's bulge is formed," Reines said. 
Reines, along with Gregory Sivakoff and Kelsey Johnson of the University of Virginia and the National Radio Astronomy Observatory (NRAO), and Crystal Brogan of the NRAO, observed Henize 2-10 with the National Science Foundation's Very Large Array radio telescope and

  3. Large-Angle CMB Suppression and Polarisation Predictions

    CERN Document Server

    Copi, C.J.; Schwarz, D.J.; Starkman, G.D.

    2013-01-01

    The anomalous lack of large angle temperature correlations has been a surprising feature of the CMB since first observed by COBE-DMR and subsequently confirmed and strengthened by WMAP. This anomaly may point to the need for modifications of the standard model of cosmology or may show that our Universe is a rare statistical fluctuation within that model. Further observations of the temperature auto-correlation function will not elucidate the issue; sufficiently high precision statistical observations already exist. Instead, alternative probes are required. In this work we explore the expectations for forthcoming polarisation observations. We define a prescription to test the hypothesis that the large-angle CMB temperature perturbations in our Universe represent a rare statistical fluctuation within the standard cosmological model. These tests are based on the temperature-Q Stokes parameter correlation. Unfortunately these tests cannot be expected to be definitive. However, we do show that if this TQ-correlati...

  4. Importance of regional species pools and functional traits in colonization processes: predicting re-colonization after large-scale destruction of ecosystems

    NARCIS (Netherlands)

    Kirmer, A.; Tischew, S.; Ozinga, W.A.; Lampe, von M.; Baasch, A.; Groenendael, van J.M.

    2008-01-01

    Large-scale destruction of ecosystems caused by surface mining provides an opportunity for the study of colonization processes starting with primary succession. Surprisingly, over several decades and without any restoration measures, most of these sites spontaneously developed into valuable biotope

  5. Interaction between numbers and size during visual search

    OpenAIRE

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2016-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numeric...

  6. A 32-bit NMOS microprocessor with a large register file

    Science.gov (United States)

    Sherburne, R. W., Jr.; Katevenis, M. G. H.; Patterson, D. A.; Sequin, C. H.

    1984-10-01

    Two scaled versions of a 32-bit NMOS reduced instruction set computer CPU, called RISC II, have been implemented on two different processing lines using the simple Mead and Conway layout rules with lambda values of 2 and 1.5 microns (corresponding to drawn gate lengths of 4 and 3 microns), respectively. The design utilizes a small set of simple instructions in conjunction with a large register file in order to provide high performance. This approach has resulted in two surprisingly powerful single-chip processors.

  7. From Lithium-Ion to Sodium-Ion Batteries: Advantages, Challenges, and Surprises.

    Science.gov (United States)

    Nayak, Prasant Kumar; Yang, Liangtao; Brehm, Wolfgang; Adelhelm, Philipp

    2018-01-02

    Mobile and stationary energy storage by rechargeable batteries is a topic of broad societal and economical relevance. Lithium-ion battery (LIB) technology is at the forefront of the development, but a massively growing market will likely put severe pressure on resources and supply chains. Recently, sodium-ion batteries (SIBs) have been reconsidered with the aim of providing a lower-cost alternative that is less susceptible to resource and supply risks. On paper, the replacement of lithium by sodium in a battery seems straightforward at first, but unpredictable surprises are often found in practice. What happens when replacing lithium by sodium in electrode reactions? This review provides a state-of-the art overview on the redox behavior of materials when used as electrodes in lithium-ion and sodium-ion batteries, respectively. Advantages and challenges related to the use of sodium instead of lithium are discussed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
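The basic LM-OSEM update can be sketched in a few lines. This is a minimal illustrative implementation, not the VIP reconstruction code: the function name `lm_osem`, the per-event system-matrix rows in `events_A`, and the sensitivity image `sens` are all assumed names, and a real Compton-camera system model (cone-of-response weights, Doppler broadening) is far more involved:

```python
import numpy as np

def lm_osem(events_A, n_voxels, n_subsets=4, n_iter=5, sens=None):
    """List-mode OSEM sketch.  events_A holds one system-matrix row per
    detected event: the probability that the event originated in each
    voxel.  The image is updated multiplicatively, one event subset at
    a time, as in ordinary OSEM but summing over events, not bins."""
    A = np.asarray(events_A, dtype=float)          # (n_events, n_voxels)
    if sens is None:
        sens = np.ones(n_voxels)                   # uniform sensitivity
    lam = np.ones(n_voxels)                        # initial estimate
    subsets = np.array_split(np.arange(len(A)), n_subsets)
    for _ in range(n_iter):
        for sub in subsets:
            fwd = A[sub] @ lam                     # forward projection per event
            fwd[fwd == 0] = 1e-12                  # guard against division by zero
            back = A[sub].T @ (1.0 / fwd)          # back-project event ratios
            lam = lam / sens * back                # multiplicative EM update
    return lam
```

Because the update is multiplicative, a non-negative starting image stays non-negative throughout, which is one reason the EM family is popular for emission tomography.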

  9. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  10. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  11. Old Star's "Rebirth" Gives Astronomers Surprises

    Science.gov (United States)

    2005-04-01

    Astronomers using the National Science Foundation's Very Large Array (VLA) radio telescope are taking advantage of a once-in-a-lifetime opportunity to watch an old star suddenly stir back into new activity after coming to the end of its normal life. Their surprising results have forced them to change their ideas of how such an old, white dwarf star can re-ignite its nuclear furnace for one final blast of energy. [Image caption: radio/optical images of Sakurai's Object; the color image shows the nebula ejected thousands of years ago, contours indicate radio emission, and the inset is a Hubble Space Telescope image of the central part of the region. Credit: Hajduk et al., NRAO/AUI/NSF, ESO, STScI, NASA.] Computer simulations had predicted a series of events that would follow such a re-ignition of fusion reactions, but the star didn't follow the script -- events moved 100 times more quickly than the simulations predicted. "We've now produced a new theoretical model of how this process works, and the VLA observations have provided the first evidence supporting our new model," said Albert Zijlstra, of the University of Manchester in the United Kingdom. Zijlstra and his colleagues presented their findings in the April 8 issue of the journal Science. The astronomers studied a star known as V4334 Sgr, in the constellation Sagittarius. It is better known as "Sakurai's Object," after Japanese amateur astronomer Yukio Sakurai, who discovered it on February 20, 1996, when it suddenly burst into new brightness. At first, astronomers thought the outburst was a common nova explosion, but further study showed that Sakurai's Object was anything but common. The star is an old white dwarf that had run out of hydrogen fuel for nuclear fusion reactions in its core. Astronomers believe that some such stars can undergo a final burst of fusion in a shell of helium that surrounds a core of heavier nuclei such as carbon and oxygen. However, the

  12. A large, benign prostatic cyst presented with an extremely high serum prostate-specific antigen level.

    Science.gov (United States)

    Chen, Han-Kuang; Pemberton, Richard

    2016-01-08

    We report a case of a patient who presented with an extremely high serum prostate specific antigen (PSA) level and underwent radical prostatectomy for presumed prostate cancer. Surprisingly, the whole mount prostatectomy specimen showed only small volume, organ-confined prostate adenocarcinoma and a large, benign intraprostatic cyst, which was thought to be responsible for the PSA elevation. 2016 BMJ Publishing Group Ltd.

  13. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections which couple different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed

  14. Low-Reynolds Number Effects in Ventilated Rooms

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.; Topp, Claus

    In the present study, we use Large Eddy Simulation (LES), which is a suitable method for simulating the flow in ventilated rooms at low Reynolds number.

  15. The Surprising Impact of Seat Location on Student Performance

    Science.gov (United States)

    Perkins, Katherine K.; Wieman, Carl E.

    2005-01-01

    Every physics instructor knows that the most engaged and successful students tend to sit at the front of the class and the weakest students tend to sit at the back. However, it is normally assumed that this is merely an indication of the respective seat location preferences of weaker and stronger students. Here we present evidence suggesting that in fact this may be mixing up the cause and effect. It may be that the seat selection itself contributes to whether the student does well or poorly, rather than the other way around. While a number of studies have looked at the effect of seat location on students, the results are often inconclusive, and few, if any, have studied the effects in college classrooms with randomly assigned seats. In this paper, we report on our observations of a large introductory physics course in which we randomly assigned students to particular seat locations at the beginning of the semester. Seat location during the first half of the semester had a noticeable impact on student success in the course, particularly in the top and bottom parts of the grade distribution. Students sitting in the back of the room for the first half of the term were nearly six times as likely to receive an F as students who started in the front of the room. A corresponding but less dramatic reversal was evident in the fractions of students receiving As. These effects were in spite of many unusual efforts to engage students at the back of the class and a front-to-back reversal of seat location halfway through the term. These results suggest there may be inherent detrimental effects of large physics lecture halls that need to be further explored.

  16. The Most Distant Mature Galaxy Cluster - Young, but surprisingly grown-up

    Science.gov (United States)

    2011-03-01

    Astronomers have used an armada of telescopes on the ground and in space, including the Very Large Telescope at ESO's Paranal Observatory in Chile to discover and measure the distance to the most remote mature cluster of galaxies yet found. Although this cluster is seen when the Universe was less than one quarter of its current age it looks surprisingly similar to galaxy clusters in the current Universe. "We have measured the distance to the most distant mature cluster of galaxies ever found", says the lead author of the study in which the observations from ESO's VLT have been used, Raphael Gobat (CEA, Paris). "The surprising thing is that when we look closely at this galaxy cluster it doesn't look young - many of the galaxies have settled down and don't resemble the usual star-forming galaxies seen in the early Universe." Clusters of galaxies are the largest structures in the Universe that are held together by gravity. Astronomers expect these clusters to grow through time and hence that massive clusters would be rare in the early Universe. Although even more distant clusters have been seen, they appear to be young clusters in the process of formation and are not settled mature systems. The international team of astronomers used the powerful VIMOS and FORS2 instruments on ESO's Very Large Telescope (VLT) to measure the distances to some of the blobs in a curious patch of very faint red objects first observed with the Spitzer space telescope. This grouping, named CL J1449+0856 [1], had all the hallmarks of being a very remote cluster of galaxies [2]. The results showed that we are indeed seeing a galaxy cluster as it was when the Universe was about three billion years old - less than one quarter of its current age [3]. Once the team knew the distance to this very rare object they looked carefully at the component galaxies using both the NASA/ESA Hubble Space Telescope and ground-based telescopes, including the VLT. They found evidence suggesting that most of the

  17. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of active jabiru nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5×10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that inter-annual variations in flooding extent can drive dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
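    The fitted relationship is simple enough to sketch numerically. The snippet below hard-codes the coefficient and exponent quoted in the abstract; the AHI values passed in are invented for illustration, and treating the model output directly as a nest count (rather than a density) is a simplification.

    ```python
    def predicted_nests(ahi: float) -> float:
        """Power-law model quoted in the abstract: 6.5e-8 * AHI**1.99.
        Interpreting the result directly as a nest count (rather than a
        density) is a simplification made for illustration."""
        return 6.5e-8 * ahi ** 1.99

    # With an exponent of ~2, doubling the flood-extension index roughly
    # quadruples the predicted number of nests (2**1.99 ≈ 3.97).
    ratio = predicted_nests(2.0e5) / predicted_nests(1.0e5)
    ```

    The near-quadratic exponent is what makes the modeled inter-annual swings (from ~220 to ~23,000 nests) so dramatic.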

  18. Number to finger mapping is topological.

    NARCIS (Netherlands)

    Plaisier, M.A.; Smeets, J.B.J.

    2011-01-01

    It has been shown that humans associate fingers with numbers because finger counting strategies interact with numerical judgements. At the same time, there is evidence that there is a relation between number magnitude and space as small to large numbers seem to be represented from left to right. In

  19. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimal) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operations, their additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions, quite analogous to the distributions of Schwartz, allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed.

  20. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks (1) sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  1. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10⁵ to 2.12×10⁶ and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 0.25-0.4% for the low Tu tests and 8-15% for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7% axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall

  2. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ, and to assess the minimum ? required for relevant turbulent scales to be captured and the minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ, from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ ⪆ 1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  3. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
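    A concrete, if minimal, example of such an evaluation is a chi-square frequency test on decimal digits; the implementation below is a generic illustration of the idea, not a method taken from the article.

    ```python
    from collections import Counter

    def chi_square_uniform(digits):
        """Pearson chi-square statistic for the hypothesis that the decimal
        digits 0-9 occur equally often; large values suggest non-randomness.
        (Counter returns 0 for digits that never appear.)"""
        n = len(digits)
        expected = n / 10
        counts = Counter(digits)
        return sum((counts[d] - expected) ** 2 / expected for d in range(10))

    # A perfectly balanced sample scores 0; for genuinely random digits the
    # statistic fluctuates around 9 (the number of degrees of freedom).
    stat = chi_square_uniform([d % 10 for d in range(1000)])
    ```

    Real randomness batteries combine many such tests over different statistics, which is part of why evaluating very large samples is as intricate as the abstract suggests.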

  4. Enhancement of phase space density by increasing trap anisotropy in a magneto-optical trap with a large number of atoms

    International Nuclear Information System (INIS)

    Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.

    2004-01-01

    The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10⁸), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A phase space density of 2×10⁻⁴ has been achieved in a magneto-optic trap containing 2×10⁸ atoms.
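    The figure of merit here, phase-space density, is the peak number density times the cube of the thermal de Broglie wavelength. A sketch of that textbook formula follows; the rubidium-like mass, temperature and density in the example are illustrative values, not numbers from the paper.

    ```python
    import math

    h = 6.62607015e-34   # Planck constant (J s)
    kB = 1.380649e-23    # Boltzmann constant (J/K)

    def phase_space_density(n, m, T):
        """Peak phase-space density rho = n * lambda_dB**3 for number
        density n (m^-3), atomic mass m (kg) and temperature T (K)."""
        lambda_dB = h / math.sqrt(2 * math.pi * m * kB * T)  # thermal de Broglie wavelength
        return n * lambda_dB ** 3

    # Illustrative (not from the paper): a rubidium-like cloud at 50 uK.
    rho = phase_space_density(n=1e17, m=1.44e-25, T=50e-6)
    ```

    Since rho scales as T^(-3/2) at fixed density, the cooling gained by suppressing multiple scattering translates directly into the phase-space-density gain the abstract reports.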

  5. Explaining the large numbers by a hierarchy of ''universes'': a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with the ''asymptotic freedom''. At last, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  6. Number-unconstrained quantum sensing

    Science.gov (United States)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  7. Visuospatial Priming of the Mental Number Line

    Science.gov (United States)

    Stoianov, Ivilin; Kramer, Peter; Umilta, Carlo; Zorzi, Marco

    2008-01-01

    It has been argued that numbers are spatially organized along a "mental number line" that facilitates left-hand responses to small numbers, and right-hand responses to large numbers. We hypothesized that whenever the representations of visual and numerical space are concurrently activated, interactions can occur between them, before response…

  8. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS -2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, compact coil geometry is also necessary. The coils are made of several insulated turns of hollow copper conductor. The high current densities in these require cooling with low-conductivity water. Additionally, during operation they are subjected to thermal fatigue stresses. A large number of coils (650 in total) having different geometries were required for all multipole magnets, such as the quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab, and all coils have been successfully made. The improved technology and production techniques adopted for the magnet coils, and their inspection, are briefly discussed in this paper. (author)

  9. CrossRef Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  10. The influence of the surprising decay properties of element 108 on search experiments for new elements

    International Nuclear Information System (INIS)

    Hofmann, S.; Armbruster, P.; Muenzenberg, G.; Reisdorf, W.; Schmidt, K.H.; Burkhard, H.G.; Hessberger, F.P.; Schoett, H.J.; Agarwal, Y.K.; Berthes, G.; Gollerthan, U.; Folger, H.; Hingmann, J.G.; Keller, J.G.; Leino, M.E.; Lemmertz, P.; Montoya, M.; Poppensieker, K.; Quint, B.; Zychor, I.

    1986-01-01

    Results of experiments to synthesize the heaviest elements are reported. Surprising is the high stability against fission not only of the odd and odd-odd nuclei but also of even isotopes of even elements. Alpha-decay data show an increasing stabilization of nuclei by shell effects up to ²⁶⁶109, the heaviest known element. Theoretically, the high stability is explained by an island of nuclei with large quadrupole and hexadecapole deformations around Z=109 and N=162. Future experiments are planned to prove the island character of these heavy nuclei. (orig.)

  11. Experimental results surprise quantum theory

    International Nuclear Information System (INIS)

    White, C.

    1986-01-01

    Interest in results from Darmstadt that positron-electron pairs are created in nuclei with high atomic numbers (in the Z range from 180-188) lies in the occurrence of a quantized positron kinetic energy peak at 300. The results lend substance to the contention of Erich Bagge that the traditionally accepted symmetries in positron-electron emission do not exist and, therefore, there is no need to posit the existence of the neutrino. The search is on for the decay of a previously unknown boson to account for the findings, which also points to the need for a major revision in quantum theory. 1 figure

  12. Hyperreal Numbers for Infinite Divergent Series

    OpenAIRE

    Bartlett, Jonathan

    2018-01-01

    Treating divergent series properly has been an ongoing issue in mathematics. However, many of the problems in divergent series stem from the fact that divergent series were discovered prior to having a number system which could handle them. The infinities that resulted from divergent series led to contradictions within the real number system, but these contradictions are largely alleviated with the hyperreal number system. Hyperreal numbers provide a framework for dealing with divergent serie...

  13. Using lanthanoid complexes to phase large macromolecular assemblies

    International Nuclear Information System (INIS)

    Talon, Romain; Kahn, Richard; Durá, M. Asunción; Maury, Olivier; Vellieux, Frédéric M. D.; Franzetti, Bruno; Girard, Eric

    2011-01-01

    A lanthanoid complex, [Eu(DPA)₃]³⁻, was used to obtain experimental phases at 4.0 Å resolution for PhTET1-12s, a large self-compartmentalized homo-dodecameric protease complex of 444 kDa. Lanthanoid ions exhibit extremely large anomalous X-ray scattering at their L_III absorption edge. They are thus well suited for anomalous diffraction experiments. A novel class of lanthanoid complexes has been developed that combines the physical properties of lanthanoid atoms with functional chemical groups that allow non-covalent binding to proteins. Two structures of large multimeric proteins have already been determined by using such complexes. Here the use of the luminescent europium tris-dipicolinate complex [Eu(DPA)₃]³⁻ to solve the low-resolution structure of a 444 kDa homododecameric aminopeptidase, called PhTET1-12s, from the archaeon Pyrococcus horikoshii is reported. Surprisingly, considering the low resolution of the data, the experimental electron density map is very well defined. Experimental phases obtained by using the lanthanoid complex lead to maps displaying particular structural features usually observed in higher-resolution maps. Such complexes open a new way for solving the structure of large molecular assemblies, even with low-resolution data.

  14. Universal power-law diet partitioning by marine fish and squid with surprising stability–diversity implications

    Science.gov (United States)

    Rossberg, Axel G.; Farnsworth, Keith D.; Satoh, Keisuke; Pinnegar, John K.

    2011-01-01

    A central question in community ecology is how the number of trophic links relates to community species richness. For simple dynamical food-web models, link density (the ratio of links to species) is bounded from above as the number of species increases; but empirical data suggest that it increases without bounds. We found a new empirical upper bound on link density in large marine communities with emphasis on fish and squid, using novel methods that avoid known sources of bias in traditional approaches. Bounds are expressed in terms of the diet-partitioning function (DPF): the average number of resources contributing more than a fraction f to a consumer's diet, as a function of f. All observed DPF follow a functional form closely related to a power law, with power-law exponents independent of species richness at the measurement accuracy. Results imply universal upper bounds on link density across the oceans. However, the inherently scale-free nature of power-law diet partitioning suggests that the DPF itself is a better defined characterization of network structure than link density. PMID:21068048
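    The DPF has a direct operational definition that is easy to state in code. The sketch below follows the definition given in the abstract; the two-consumer toy diet data is invented for illustration.

    ```python
    def diet_partitioning_function(diets, f):
        """Average, over consumers, of the number of resources contributing
        more than a fraction f to that consumer's diet."""
        counts = [sum(1 for frac in d if frac > f) for d in diets]
        return sum(counts) / len(counts)

    # Toy data: two consumers, three diet fractions each (summing to 1).
    diets = [[0.6, 0.3, 0.1], [0.5, 0.25, 0.25]]
    dpf_low = diet_partitioning_function(diets, 0.05)   # every resource counts
    dpf_high = diet_partitioning_function(diets, 0.4)   # only dominant resources
    ```

    By construction the DPF decreases as f grows; the empirical finding above is that, across marine communities, this decay follows a power law whose exponent does not change with species richness.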

  15. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrates that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggests eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). 
We demonstrate for the first time that there are regional differences in the requirement of

  16. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is offered by schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility to apply size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes.
This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask
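    The subsampling effect described above can be mimicked with a toy experiment: scatter points uniformly (a stand-in for an unorganized cloud field, where subdomain counts are near-Poisson) and measure how count fluctuations depend on subdomain size. Everything below is illustrative and unrelated to the actual LES data.

    ```python
    import random

    random.seed(0)

    def relative_count_std(points, side, trials=2000):
        """Relative standard deviation of the number of points falling in a
        randomly placed square subdomain of the given side length."""
        counts = []
        for _ in range(trials):
            x0 = random.uniform(0.0, 1.0 - side)
            y0 = random.uniform(0.0, 1.0 - side)
            counts.append(sum(1 for x, y in points
                              if x0 <= x < x0 + side and y0 <= y < y0 + side))
        mean = sum(counts) / trials
        var = sum((c - mean) ** 2 for c in counts) / trials
        return var ** 0.5 / mean

    # 500 uniformly scattered "clouds" in a unit domain: smaller subdomains
    # show larger relative count fluctuations.
    points = [(random.random(), random.random()) for _ in range(500)]
    ```

    In this unorganized toy field the fluctuation growth at small subdomain sizes is purely a subsampling effect; in the study above, organization of the cloud field enhances the variability beyond this baseline.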

  17. Surprises from the resolution of operator mixing in N=4 SYM

    International Nuclear Information System (INIS)

    Bianchi, Massimo; Rossi, Giancarlo; Stanev, Yassen S.

    2004-01-01

    We reexamine the problem of operator mixing in N=4 SYM. Particular attention is paid to the correct definition of composite gauge invariant local operators, which is necessary for the computation of their anomalous dimensions beyond lowest order. As an application we reconsider the case of operators with naive dimension Δ₀ = 4, already studied in the literature. Stringent constraints from the resummation of logarithms in power behaviours are exploited and the role of the generalized N=4 Konishi anomaly in the mixing with operators involving fermions is discussed. A general method for the explicit (numerical) resolution of the operator mixing and the computation of anomalous dimensions is proposed. We then resolve the order g² mixing for the 15 (purely scalar) singlet operators of naive dimension Δ₀ = 6. Rather surprisingly we find one isolated operator which has a vanishing anomalous dimension up to order g⁴, belonging to an apparently long multiplet. We also solve the order g² mixing for the 26 operators belonging to the representation 20' of SU(4). We find an operator with the same one-loop anomalous dimension as the Konishi multiplet.

  18. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    International Nuclear Information System (INIS)

    Hasegawa, K.; Lim, C.S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario

  19. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    Science.gov (United States)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  20. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    OpenAIRE

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  1. Large transverse momentum phenomena

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1977-09-01

    It is pointed out that it is particularly significant that the quantum numbers of the leading particles are strongly correlated with the quantum numbers of the incident hadrons, indicating that the valence quarks themselves are transferred to large p_t. The crucial question is how they get there. Various hadron reactions are discussed, covering the structure of exclusive reactions, inclusive reactions, normalization of inclusive cross sections, charge correlations, and jet production at large transverse momentum. 46 references

  2. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Background: Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High-throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results: We sequenced 78 million base pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and to discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion: This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
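    The copy-number use of such a survey boils down to a depth ratio: read coverage observed on a region versus the genome-wide average expected for a single-copy sequence. The estimator below is a plausible sketch of that idea, not the exact method of the paper, and the example numbers are illustrative (a ~1.1 Gb soybean genome size is assumed).

    ```python
    def estimated_copy_number(region_bp_hit, region_len, survey_bp, genome_size):
        """Rough genomic copy number of a region from a random shotgun survey:
        observed read depth on the region divided by the depth expected for a
        single-copy sequence (the genome-wide average). All arguments in bp."""
        observed_depth = region_bp_hit / region_len
        single_copy_depth = survey_bp / genome_size
        return observed_depth / single_copy_depth

    # Illustrative: a 1 kb repeat hit by 50 kb of reads, in a 78 Mb survey of
    # an assumed ~1.1 Gb genome, comes out at roughly 700 copies.
    cn = estimated_copy_number(50_000, 1_000, 78_000_000, 1_100_000_000)
    ```

    Because the survey reads are (nearly) uniformly sampled, this ratio can be applied to any clone or clone-end sequence without an assembly, which is the rapid assessment the conclusion refers to.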

  3. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  4. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  5. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  6. Particle creation and particle number in an expanding universe

    International Nuclear Information System (INIS)

    Parker, Leonard

    2012-01-01

    I describe the logical basis of the method that I developed in 1962 and 1963 to define a quantum operator corresponding to the observable particle number of a quantized free scalar field in a spatially-flat isotropically expanding (and/or contracting) universe. This work also showed for the first time that particles were created from the vacuum by the curved spacetime of an expanding spatially-flat Friedmann–Lemaître–Robertson–Walker (FLRW) universe. The same process is responsible for creating the nearly scale-invariant spectrum of quantized perturbations of the inflaton scalar field during the inflationary stage of the expansion of the universe. I explain how the method that I used to obtain the observable particle number operator involved adiabatic invariance of the particle number (hence, the name adiabatic regularization) and the quantum theory of measurement of particle number in an expanding universe. I also show how I was led in a surprising way, to the discovery in 1964 that there would be no particle creation by these spatially-flat FLRW universes for free fields of any integer or half-integer spin satisfying field equations that are invariant under conformal transformations of the metric. The methods I used to define adiabatic regularization for particle number were based on generally-covariant concepts like adiabatic invariance and measurement that were fundamental and determined results that were unique to each given adiabatic order. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  7. Large-scale biophysical evaluation of protein PEGylation effects

    DEFF Research Database (Denmark)

    Vernet, Erik; Popa, Gina; Pozdnyakova, Irina

    2016-01-01

    PEGylation is the most widely used method to chemically modify protein biopharmaceuticals, but surprisingly limited public data is available on the biophysical effects of protein PEGylation. Here we report the first large-scale study, with site-specific mono-PEGylation of 15 different proteins...... of PEGylation on the thermal stability of a protein based on data generated by circular dichroism (CD), differential scanning calorimetry (DSC), or differential scanning fluorimetry (DSF). In addition, DSF was validated as a fast and inexpensive screening method for thermal unfolding studies of PEGylated...... proteins. Multivariate data analysis revealed clear trends in biophysical properties upon PEGylation for a subset of proteins, although no universal trends were found. Taken together, these findings are important in the consideration of biophysical methods and evaluation of second...

  8. Some types of parent number talk count more than others: relations between parents' input and children's cardinal-number knowledge.

    Science.gov (United States)

    Gunderson, Elizabeth A; Levine, Susan C

    2011-09-01

    Before they enter preschool, children vary greatly in their numerical and mathematical knowledge, and this knowledge predicts their achievement throughout elementary school (e.g. Duncan et al., 2007; Ginsburg & Russell, 1981). Therefore, it is critical that we look to the home environment for parental inputs that may lead to these early variations. Recent work has shown that the amount of number talk that parents engage in with their children is robustly related to a critical aspect of mathematical development - cardinal-number knowledge (e.g. knowing that the word 'three' refers to sets of three entities; Levine, Suriyakham, Rowe, Huttenlocher & Gunderson, 2010). The present study characterizes the different types of number talk that parents produce and investigates which types are most predictive of children's later cardinal-number knowledge. We find that parents' number talk involving counting or labeling sets of present, visible objects is related to children's later cardinal-number knowledge, whereas other types of parent number talk are not. In addition, number talk that refers to large sets of present objects (i.e. sets of size 4 to 10 that fall outside children's ability to track individual objects) is more robustly predictive of children's later cardinal-number knowledge than talk about smaller sets. The relation between parents' number talk about large sets of present objects and children's cardinal-number knowledge remains significant even when controlling for factors such as parents' socioeconomic status and other measures of parents' number and non-number talk. © 2011 Blackwell Publishing Ltd.

  9. Pseudohalide (SCN(-))-Doped MAPbI3 Perovskites: A Few Surprises.

    Science.gov (United States)

    Halder, Ansuman; Chulliyil, Ramya; Subbiah, Anand S; Khan, Tuhin; Chattoraj, Shyamtanu; Chowdhury, Arindam; Sarkar, Shaibal K

    2015-09-03

    Pseudohalide thiocyanate anion (SCN(-)) has been used as a dopant in a methylammonium lead tri-iodide (MAPbI3) framework, aiming for its use as an absorber layer for photovoltaic applications. The substitution of SCN(-) pseudohalide anion, as verified using Fourier transform infrared (FT-IR) spectroscopy, results in a comprehensive effect on the optical properties of the original material. Photoluminescence measurements at room temperature reveal a significant enhancement in the emission quantum yield of MAPbI3-x(SCN)x as compared to MAPbI3, suggestive of suppression of nonradiative channels. This increased intensity is attributed to a highly edge specific emission from MAPbI3-x(SCN)x microcrystals as revealed by photoluminescence microscopy. Fluorescence lifetime imaging measurements further established contrasting carrier recombination dynamics for grain boundaries and the bulk of the doped material. Spatially resolved emission spectroscopy on individual microcrystals of MAPbI3-x(SCN)x reveals that the optical bandgap and density of states at various (local) nanodomains are also nonuniform. Surprisingly, several (local) emissive regions within MAPbI3-x(SCN)x microcrystals are found to be optically unstable under photoirradiation, and display unambiguous temporal intermittency in emission (blinking), which is extremely unusual and intriguing. We find diverse blinking behaviors for the undoped MAPbI3 crystals as well, which leads us to speculate that blinking may be a common phenomenon for most hybrid perovskite materials.

  10. Fourier analysis in combinatorial number theory

    International Nuclear Information System (INIS)

    Shkredov, Il'ya D

    2010-01-01

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  11. Fourier analysis in combinatorial number theory

    Energy Technology Data Exchange (ETDEWEB)

    Shkredov, Il'ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)]

    2010-09-16

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  12. Surprising structures hiding in Penrose’s future null infinity

    Science.gov (United States)

    Newman, Ezra T.

    2017-07-01

    Since the late 1950s, almost all discussions of asymptotically flat (Einstein-Maxwell) space-times have taken place in the context of Penrose’s null infinity, I+. In addition, almost all calculations have used the Bondi coordinate and tetrad systems. Beginning with a known asymptotically flat solution to the Einstein-Maxwell equations, we show, first, that there are other natural coordinate systems near I+ (analogous to light-cones in flat-space) that are based on (asymptotically) shear-free null geodesic congruences (analogous to the flat-space case). Using these new coordinates and their associated tetrad, we define the complex dipole moment (the mass dipole plus i times angular momentum) from the l = 1 harmonic coefficient of a component of the asymptotic Weyl tensor. Second, from this definition, from the Bianchi identities and from the Bondi-Sachs mass and linear momentum, we show that there exists a large number of results (identifications and dynamics) identical to those of classical mechanics and electrodynamics. They include, among many others, P = Mv + ..., L = r × P, spin, Newton’s second law with the rocket force term (Ṁv) and radiation reaction, angular momentum conservation and others. All these relations take place in the rather mysterious H-space rather than in space-time. This leads to the enigma: ‘why do these well known relations of classical mechanics take place in H-space?’ and ‘What is the physical meaning of H-space?’

  13. Large Display Interaction Using Mobile Devices

    OpenAIRE

    Bauer, Jens

    2015-01-01

    Large displays become more and more popular, due to dropping prices. Their size and high resolution leverage collaboration and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big data applications increases. The increased screen size and other properties of large displays pose new challenges to the Human-Computer Interaction with these screens. This includes issues such as limited scalability to the number of users, diver...

  14. Transitional boundary layer in low-Prandtl-number convection at high Rayleigh number

    Science.gov (United States)

    Schumacher, Joerg; Bandaru, Vinodh; Pandey, Ambrish; Scheel, Janet

    2016-11-01

    The boundary layer structure of the velocity and temperature fields in turbulent Rayleigh-Bénard flows in closed cylindrical cells of unit aspect ratio is revisited from a transitional and turbulent viscous boundary layer perspective. When the Rayleigh number is large enough, the boundary layer dynamics at the bottom and top plates can be separated into an impact region of downwelling plumes, an ejection region of upwelling plumes and an interior region (away from side walls) that is dominated by a shear flow of varying orientation. This interior plate region is compared here to classical wall-bounded shear flows. The working fluid is liquid mercury or liquid gallium at a Prandtl number of Pr = 0.021 for a range of Rayleigh numbers starting at 3 × 10^5. (Supported by the Deutsche Forschungsgemeinschaft.)

  15. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, and thereby agrees with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification to quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants including Planck's h-bar as well as Boltzmann's k_B by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant to cosmologically long time.

  16. Nuclear refugees after large radioactive releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Groell, Jérôme

    2016-01-01

    However improbable, large radioactive releases from a nuclear power plant would entail major consequences for the surrounding population. In Fukushima, 80,000 people had to evacuate the most contaminated areas around the NPP for a prolonged period of time. These people have been called “nuclear refugees”. The paper first argues that the number of nuclear refugees is a better measure of the severity of radiological consequences than the number of fatalities, although the latter is widely used to assess other catastrophic events such as earthquakes or tsunami. It is a valuable partial indicator in the context of comprehensive studies of overall consequences. Section 2 makes a clear distinction between long-term relocation and emergency evacuation and proposes a method to estimate the number of refugees. Section 3 examines the distribution of nuclear refugees with respect to weather and release site. The distribution is asymmetric and fat-tailed: unfavorable weather can lead to the contamination of large areas of land; large cities have in turn a higher probability of being contaminated. - Highlights: • Number of refugees is a good indicator of the severity of radiological consequences. • It is a better measure of the long-term consequences than the number of fatalities. • A representative meteorological sample should be sufficiently large. • The number of refugees highly depends on the release site in a country like France.

  17. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    Directory of Open Access Journals (Sweden)

    Débora Jardim-Messeder

    2017-12-01

    Full Text Available Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal

  18. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved three-dimensional large eddy simulations (LES) at Grashof numbers up to 8 × 10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decreasing as t^{-1/3} (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed; in particular, their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  19. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large Peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
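The Green's-function representation mentioned in this abstract can be sketched numerically. In this minimal illustration (hypothetical parameter values, not taken from the report), the downstream concentration from a finite-duration source is the convolution of the source rate with the one-dimensional advection-dispersion Green's function, and at large Peclet number the peak breakthrough value is nearly insensitive to the shape of the source pulse once total mass and duration are fixed:

```python
import math

def green(x: float, t: float, v: float, D: float) -> float:
    """1-D advection-dispersion Green's function: instantaneous unit mass at x=0, t=0."""
    if t <= 0:
        return 0.0
    return math.exp(-(x - v * t) ** 2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def breakthrough_peak(source_rate, T, x=1.0, v=1.0, D=0.01):
    """Peak of c(x, t) = integral over tau in [0, T] of q(tau) * G(x, t - tau)."""
    n_tau = 200
    dtau = T / n_tau
    taus = [(i + 0.5) * dtau for i in range(n_tau)]
    peak, t_peak = 0.0, 0.0
    t = 0.5
    while t < 1.5:  # scan around the advective arrival time x/v = 1
        c = sum(source_rate(tau) * green(x, t - tau, v, D) for tau in taus) * dtau
        if c > peak:
            peak, t_peak = c, t
        t += 0.002
    return peak, t_peak

T = 0.05  # source duration, short compared with the dispersive spread at breakthrough

def square(tau):    # constant release rate carrying unit mass
    return 1.0 / T

def triangle(tau):  # triangular pulse carrying unit mass
    return (4 / T) * (0.5 - abs(tau / T - 0.5))

p_sq, t_sq = breakthrough_peak(square, T)
p_tri, t_tri = breakthrough_peak(triangle, T)
# Both peaks arrive near t = x/v and nearly coincide in magnitude,
# illustrating the insensitivity to source shape at large Peclet number.
```

Here Pe = vx/D = 100; the two time scales the report identifies (source duration and breakthrough time) control the answer, and the detailed pulse shape barely matters.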

  20. Hillslope, river, and Mountain: some surprises in Landscape evolution (Ralph Alger Bagnold Medal Lecture)

    Science.gov (United States)

    Tucker, G. E.

    2012-04-01

    Geomorphology, like the rest of geoscience, has always had two major themes: a quest to understand the earth's history and 'products' - its landscapes and seascapes - and, in parallel, a quest to understand its formative processes. This dualism is manifest in the remarkable career of R. A. Bagnold, who was inspired by landforms such as dunes, and dedicated to understanding the physical processes that shaped them. His legacy inspires us to emulate two principles at the heart of his contributions: the benefits of rooting geomorphic theory in basic physics, and the importance of understanding geomorphic systems in terms of simple equations framed around energy or force. Today, following Bagnold's footsteps, the earth-surface process community is engaged in a quest to build, test, and refine an ever-improving body of theory to describe our planet's surface and its evolution. In this lecture, I review a small sample of some of the fruits of that quest, emphasizing the value of surprises encountered along the way. The first example involves models of long-term river incision into bedrock. When the community began to grapple with how to represent this process mathematically, several different ideas emerged. Some were based on the assumption that sediment transport is the limiting factor; others assumed that hydraulic stress on rock is the key, while still others treated rivers as first-order 'reactors.' Thanks in part to advances in digital topography and numerical computing, the predictions of these models can be tested using natural-experiment case studies. Examples from the King Range, USA, the Central Apennines, Italy, and the fold-thrust belt of Taiwan, illustrate that independent knowledge of history and/or tectonics makes it possible to quantify how the rivers have responded to external forcing. 
Some interesting surprises emerge; for example, the relief-uplift relationship can be highly nonlinear in a steady-state landscape because of grain-entrainment thresholds

  1. Disordered strictly jammed binary sphere packings attain an anomalously large range of densities

    Science.gov (United States)

    Hopkins, Adam B.; Stillinger, Frank H.; Torquato, Salvatore

    2013-08-01

    Previous attempts to simulate disordered binary sphere packings have been limited in producing mechanically stable, isostatic packings across a broad spectrum of packing fractions. Here we report that disordered strictly jammed binary packings (packings that remain mechanically stable under general shear deformations and compressions) can be produced with an anomalously large range of average packing fractions 0.634≤ϕ≤0.829 for small to large sphere radius ratios α restricted to α≥0.100. Surprisingly, this range of average packing fractions is obtained for packings containing a subset of spheres (called the backbone) that are exactly strictly jammed, exactly isostatic, and also generated from random initial conditions. Additionally, the average packing fractions of these packings at certain α and small sphere relative number concentrations x approach those of the corresponding densest known ordered packings. These findings suggest for entropic reasons that these high-density disordered packings should be good glass formers and that they may be easy to prepare experimentally. We also identify an unusual feature of the packing fraction of jammed backbones (packings with rattlers excluded). The backbone packing fraction is about 0.624 over the majority of the α-x plane, even when large numbers of small spheres are present in the backbone. Over the (relatively small) area of the α-x plane where the backbone is not roughly constant, we find that backbone packing fractions range from about 0.606 to 0.829, with the volume of rattler spheres comprising between 1.6% and 26.9% of total sphere volume. To generate isostatic strictly jammed packings, we use an implementation of the Torquato-Jiao sequential linear programming algorithm [Phys. Rev. E 82, 061302 (2010)], which is an efficient producer of inherent structures (mechanically stable configurations at the local maxima in the density landscape). The identification and

  2. Asymptotic numbers, asymptotic functions and distributions

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1979-07-01

    The asymptotic functions are a new type of generalized functions. Unlike the distributions of Schwartz, however, they are not functionals on a space of test functions. They are mappings of the set denoted by A into A, where A is the set of the asymptotic numbers introduced by Christov. For its part, A is a totally ordered set of generalized numbers including the system of real numbers R as well as infinitesimals and infinitely large numbers. Any two asymptotic functions can be multiplied. On the other hand, the distributions have realizations as asymptotic functions in a certain sense. (author)

  3. conference report thai me up, thai me down — the xv ias conference ...

    African Journals Online (AJOL)

    2004-08-01

    Aug 1, 2004 ... makers, network with collaborators, sample the local Asian cuisine, and try to ... Bangkok is hot, bustling and polluted, with traffic from hell; it's also fun, ..... for us were the surprisingly large number of 'data-empty' presentations,.

  4. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) includes an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. The choice of statistic may also have a large effect on the estimated mortality rates for other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretations of the study findings.
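The distinction between the two statistics can be made concrete with a toy computation (illustrative records only, not data from the report): an underlying-cause count includes a death only when the disease is coded as the underlying cause, while a multiple-cause count includes any death certificate that mentions it.

```python
# Each toy death record lists one underlying cause plus any contributing causes
# mentioned on the certificate.
deaths = [
    {"underlying": "asbestosis", "contributing": []},
    {"underlying": "heart disease", "contributing": ["silicosis"]},
    {"underlying": "silicosis", "contributing": ["heart disease"]},
    {"underlying": "lung cancer", "contributing": ["asbestosis"]},
]

def underlying_count(cause: str) -> int:
    """Deaths where the disease is coded as the underlying cause."""
    return sum(1 for d in deaths if d["underlying"] == cause)

def multiple_cause_count(cause: str) -> int:
    """Deaths where the disease appears anywhere on the certificate."""
    return sum(1 for d in deaths
               if d["underlying"] == cause or cause in d["contributing"])

# For asbestosis: underlying_count gives 1, multiple_cause_count gives 2,
# so rates computed from the two statistics differ by a factor of two here.
```

Comorbidity drives the gap: the more often a lung disease appears as a contributing rather than underlying cause, the more the two statistics diverge.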

  5. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Andrews, Malcolm J., Ph.D.

    2004-01-01

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new Air/Helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is now complete and extensive testing and validation of diagnostics has been performed. Currently experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate Research Assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers are in preparation that describe the work. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.'s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate Research Assistants are supported on the project, and two (2) undergraduate Research Assistants. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations.

  6. Surprising Ripple Effects: How Changing the SAT Score-Sending Policy for Low-Income Students Impacts College Access and Success

    Science.gov (United States)

    Hurwitz, Michael; Mbekeani, Preeya P.; Nipson, Margaret M.; Page, Lindsay C.

    2017-01-01

    Subtle policy adjustments can induce relatively large "ripple effects." We evaluate a College Board initiative that increased the number of free SAT score reports available to low-income students and changed the time horizon for using these score reports. Using a difference-in-differences analytic strategy, we estimate that targeted…

  7. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  8. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met

  9. Turbulence, raindrops and the l^(1/2) number density law

    Energy Technology Data Exchange (ETDEWEB)

    Lovejoy, S [Department of Physics, McGill University, 3600 University street, Montreal, Quebec, H3A 2T8 (Canada); Schertzer, D [Universite Paris-Est, ENPC/CEREVE, 77455 Marne-la-Vallee Cedex 2 (France)], E-mail: lovejoy@physics.mcgill.ca

    2008-07-15

    Using a unique data set of three-dimensional drop positions and masses (the HYDROP experiment), we show that the distribution of liquid water in rain displays a sharp transition between large scales, which follow a passive scalar-like Corrsin-Obukhov (k^(-5/3)) spectrum, and a small-scale statistically homogeneous white-noise regime. We argue that the transition scale l_c is the critical scale where the mean Stokes number (= drop inertial time/turbulent eddy time) St_l is unity. For five storms, we found l_c in the range 45-75 cm, with the corresponding Stokes number at the dissipation scale, St_η, in the range 200-300. Since the mean interdrop distance (≈10 cm) was significantly smaller than l_c, we infer that rain consists of 'patches' whose mean liquid water content is determined by turbulence, with each patch being statistically homogeneous. For l > l_c we have St_l < 1 and, due to the observed statistical homogeneity for l < l_c, we consider the number and mass densities (n and ρ) and their variance fluxes (ψ and χ). By showing that χ is dissipated at small scales (with l_ρ,diss ≈ l_c) and ψ over a wide range, we conclude that ρ should indeed follow Corrsin-Obukhov k^(-5/3) spectra but that n should instead follow a k^(-2) spectrum, corresponding to fluctuations scaling as δρ ~ l^(1/3) and δn ~ l^(1/2). While the Corrsin-Obukhov law has never been observed in rain before, its discovery is perhaps not surprising; in contrast, the δn ~ l^(1/2) number density law is quite new. The key difference between the δρ and δn laws is the fact that the microphysics (coalescence, breakup) conserves drop mass, but not numbers of particles. This implies that the timescale for the transfer of the
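The transition criterion above (St_l = 1, with the Stokes number defined as drop inertial time over eddy turnover time) can be turned into a back-of-the-envelope estimate. All parameter values below (drop diameter, air viscosity, dissipation rate) are illustrative assumptions, not the HYDROP measurements, and Stokes drag is only a rough approximation for millimetre-sized drops.

```python
# Illustrative estimate of the critical scale l_c where the mean Stokes
# number St_l = tau_p / tau_l equals 1. Parameter values are assumptions.

def drop_inertial_time(d, rho_w=1000.0, mu_air=1.8e-5):
    """Stokes relaxation time of a drop of diameter d (m): tau_p = rho_w d^2 / (18 mu)."""
    return rho_w * d**2 / (18.0 * mu_air)

def eddy_turnover_time(l, eps):
    """Kolmogorov-scaling eddy time at scale l (m): tau_l = (l^2 / eps)^(1/3)."""
    return (l**2 / eps) ** (1.0 / 3.0)

def critical_scale(d, eps):
    """Solve St_l = tau_p / tau_l = 1 for l: l_c = tau_p^(3/2) * eps^(1/2)."""
    tau_p = drop_inertial_time(d)
    return tau_p ** 1.5 * eps ** 0.5

if __name__ == "__main__":
    eps = 1e-3  # assumed mean dissipation rate, m^2/s^3
    for d_mm in (0.5, 1.0, 2.0):
        print(f"d = {d_mm} mm: l_c ~ {critical_scale(d_mm * 1e-3, eps):.2f} m")
```

With these assumed values the estimate runs from a few centimetres to about a metre depending on drop size, bracketing the 45-75 cm range reported in the abstract.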

  10. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph C_n with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. In the SIR model, each vertex is in one of three states: 'susceptible', 'infective' or 'removed'. An infective vertex infects a given susceptible neighbor at rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there are no removed vertices and the number of infective vertices follows a binomial distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
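The dynamics described in this abstract can be sketched as a small Gillespie-style simulation. The weight distribution (uniform on [0.5, 1.5]), the rate constant `lam`, and all parameter values are illustrative assumptions; this illustrates the model only, not the paper's law of large numbers.

```python
# Minimal event-driven sketch of the SIR model with i.i.d. vertex weights on
# an Erdos-Renyi graph G(n, p): an infective u infects a susceptible
# neighbour v at rate lam * rho_u * rho_v and recovers at constant rate 1.
import random

def erdos_renyi(n, p, rng):
    """Keep each edge of the complete graph independently with probability p."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def simulate_sir(n=200, p=0.05, lam=1.0, theta=0.05, t_max=10.0, seed=1):
    rng = random.Random(seed)
    adj = erdos_renyi(n, p, rng)
    rho = [rng.uniform(0.5, 1.5) for _ in range(n)]        # i.i.d. vertex weights
    state = ['I' if rng.random() < theta else 'S' for _ in range(n)]
    t = 0.0
    while t < t_max:
        infectives = [u for u in range(n) if state[u] == 'I']
        if not infectives:
            break
        # total event rate: recoveries (rate 1 each) + infections on S-I edges
        pairs = [(u, v) for u in infectives for v in adj[u] if state[v] == 'S']
        rate_rec = float(len(infectives))
        rate_inf = lam * sum(rho[u] * rho[v] for u, v in pairs)
        total = rate_rec + rate_inf
        t += rng.expovariate(total)
        if rng.random() < rate_rec / total:                # a recovery happens
            state[rng.choice(infectives)] = 'R'
        else:                                              # an infection happens
            weights = [rho[u] * rho[v] for u, v in pairs]
            x = rng.random() * sum(weights)
            acc = 0.0
            for (u, v), w in zip(pairs, weights):
                acc += w
                if x <= acc:
                    state[v] = 'I'
                    break
    return {s: sum(1 for x in state if x == s) for s in 'SIR'}
```

Running `simulate_sir()` for growing n shows the susceptible fraction stabilising, which is the qualitative content of the law of large numbers stated above.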

  11. The acoustic environment in large HTGR's

    International Nuclear Information System (INIS)

    Burton, T.E.

    1979-01-01

    Well-known techniques for estimating acoustic vibration of structures have been applied to a General Atomic high-temperature gas-cooled reactor (HTGR) design. It is shown that one must evaluate internal loss factors for both fluid and structure modes, as well as radiation loss factors, to avoid large errors in estimated structural response. At any frequency above 1350 rad/s there are generally at least 20 acoustic modes contributing to acoustic pressure, so statistical energy analysis may be employed. But because the gas circuit consists mainly of high-aspect-ratio cavities, reverberant fields are nowhere isotropic below 7500 rad/s, and in some regions are not isotropic below 60 000 rad/s. In comparison with isotropic reverberant fields, these anisotropic fields enhance the radiation efficiencies of some structural modes at low frequencies, but have surprisingly little effect at most frequencies. The efficiency of a dipole sound source depends upon its orientation. (Auth.)

  12. Nature and numbers a mathematical photo shooting

    CERN Document Server

    Glaeser, Georg

    2014-01-01

    The book offers 180 pages of spectacular photos and unusual views and insights. Learn to see the world with different eyes and be prepared for many surprises and new facts. The photos give rise to questions that are carefully explained with mathematics.

  13. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

    The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting signs of respiratory disease were selected and randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) needed further treatment, of which 6 (3%) were cured and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) needed further treatment, of which 10 (5%) were cured and 18 (9%) died. This study revealed that TUL was more efficacious than the combined treatment using FFC and LAOTC. As the first report of its kind, this field trial describes the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]

  14. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  15. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be rejected at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold, and increase even more strongly for small temporal subdivisions of the catalogues. The Poisson distribution for large rate values approaches the Gaussian law, so its skewness tends to zero.
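The moment comparison underlying this test can be sketched as follows. The formulas are the standard Poisson and negative binomial moment expressions (number-of-failures parametrisation); the mean/variance values used in any example are illustrative, not taken from the catalogues.

```python
# Moment-matching sketch: overdispersed counts (variance > mean) cannot be
# fit by a Poisson distribution, while the two-parameter NBD matches both
# moments exactly. Standard closed-form moment formulas are used throughout.
import math

def poisson_moments(lam):
    """Mean, variance, skewness, excess kurtosis of Poisson(lambda)."""
    return lam, lam, 1.0 / math.sqrt(lam), 1.0 / lam

def nbd_fit_by_moments(mean, var):
    """NBD (number-of-failures form): mean = r(1-p)/p, var = r(1-p)/p^2.
    Moment matching gives p = mean/var, r = mean^2/(var - mean); needs var > mean."""
    if var <= mean:
        raise ValueError("no overdispersion: NBD moment fit undefined")
    p = mean / var
    r = mean**2 / (var - mean)
    return r, p

def nbd_moments(r, p):
    """Mean, variance, skewness, excess kurtosis of NBD(r, p)."""
    mean = r * (1 - p) / p
    var = r * (1 - p) / p**2
    skew = (2 - p) / math.sqrt(r * (1 - p))
    ekurt = 6.0 / r + p**2 / (r * (1 - p))
    return mean, var, skew, ekurt
```

Moment matching with p = mean/var and r = mean^2/(var - mean) reproduces any overdispersed mean/variance pair, whereas a Poisson fit forces variance = mean and skewness 1/sqrt(lambda), which is the paper's point about clustering.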

  16. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent examination of our 2009 zooplankton samples from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open-ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  17. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet access is currently available for residential use via cable modems and various forms of digital subscriber line (DSL). Powerline communication (PLC) systems were long not considered seriously for data communication due to their low speed and high development cost. However, thanks to technological advances, PLC is now spreading to local area networks and broadband-over-powerline systems. This paper presented a newly proposed modification of the standard HomePlug 1.0 MAC protocol that makes it a constant-contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput of the original scheme degrades significantly as the number of users increases. For that reason, a constant-contention-window-based medium access control protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed to model this modified protocol under saturation conditions. Modelling results accurately matched the actual performance of the system. This paper revealed that performance can be improved significantly if the protocol variables are parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
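The throughput collapse with many users, and the benefit of tuning the contention window to the number of active stations, can be illustrated with a textbook slotted CSMA/CA approximation: a station with constant window W attempts in a slot with probability tau ≈ 2/(W+1). This is a generic Bianchi-style sketch, not the paper's exact HomePlug 1.0 Markov chain.

```python
# Slotted CSMA/CA sketch with a constant contention window W: a slot is
# successful iff exactly one of n saturated stations attempts in it.

def success_probability(n, W):
    """Per-slot success probability for n stations, window W (tau ~ 2/(W+1))."""
    tau = 2.0 / (W + 1)
    return n * tau * (1 - tau) ** (n - 1)

def best_window(n, w_max=1024):
    """Window maximising per-slot success probability for n saturated stations."""
    return max(range(2, w_max + 1), key=lambda W: success_probability(n, W))
```

The success probability is maximised near tau = 1/n, i.e. W ≈ 2n - 1, which is why knowing the number of active stations lets the protocol pick a near-optimal constant window.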

  18. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  19. Prandtl-number Effects in High-Rayleigh-number Spherical Convection

    Science.gov (United States)

    Orvedahl, Ryan J.; Calkins, Michael A.; Featherstone, Nicholas A.; Hindman, Bradley W.

    2018-03-01

    Convection is the predominant mechanism by which energy and angular momentum are transported in the outer portion of the Sun. The resulting overturning motions are also the primary energy source for the solar magnetic field. An accurate solar dynamo model therefore requires a complete description of the convective motions, but these motions remain poorly understood. Studying stellar convection numerically remains challenging; it occurs within a parameter regime that is extreme by computational standards. The fluid properties of the convection zone are characterized in part by the Prandtl number Pr = ν/κ, where ν is the kinematic viscosity and κ is the thermal diffusivity; in stars, Pr is extremely low, Pr ≈ 10^-7. The influence of Pr on the convective motions at the heart of the dynamo is not well understood, since most numerical studies are limited to Pr ≈ 1. We systematically vary Pr and the degree of thermal forcing, characterized through a Rayleigh number, to explore their influence on the convective dynamics. For sufficiently large thermal driving, the simulations reach a so-called convective free-fall state where diffusion no longer plays an important role in the interior dynamics. Simulations with a lower Pr generate faster convective flows and broader ranges of scales for equivalent levels of thermal forcing. Characteristics of the spectral distribution of the velocity remain largely insensitive to changes in Pr. Importantly, we find that Pr plays a key role in determining when the free-fall regime is reached by controlling the thickness of the thermal boundary layer.

  20. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    was investigated at a jet Reynolds number of 1.66 × 10^5 and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet … to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10^5 to 6.64 × 10^5. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers. Copyright © 2013 Taylor and Francis Group, LLC.

  1. AN ANALYSIS OF NUMBER SENSE AND MENTAL COMPUTATION IN THE LEARNING OF MATHEMATICS

    Directory of Open Access Journals (Sweden)

    Parmit Singh Aperapar

    2011-04-01

    The purpose of this research was to assess students' understanding of number sense and mental computation among Form One, Form Two, Form Three and Form Four students. A total of 1756 students, ages ranging from 12 to 17 years, from thirteen schools in Selangor participated in this study. A majority (74.9%) of these students obtained an A grade in their respective year-end school examinations. The design of this study was quantitative in nature: data on students' number sense were collected using two instruments, namely the Number Sense Test and the Mental Computation Test, consisting of 50 and 45 items respectively. The results indicate that students could not cope with the Number Sense Test as well as with the Mental Computation Test: scores on the former were low, from 37.3% to 47.7%, compared with 79% to 88.6% on the latter, across the levels. In the Number Sense Test, surprisingly, there was no significant difference in the results between Form 1 and Form 2 students, nor between Form 3 and Form 4 students. This seems to indicate that as the number of years in school increases, there is an increasing reliance on algorithms and procedures. Although it has been argued in the literature that including mental computation in a mathematics curriculum promotes number sense (McIntosh et al., 1997; Reys, Reys, Nohda, & Emori, 2005), this was not the case in this study. It seems that an overreliance on paper-and-pencil computation at the expense of intuitive understanding of numbers is taking place among these students.

  2. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200 down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as that of the analytical predictions in the literature.

  3. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200 down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as that of the analytical predictions in the literature.

  4. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It runs under Linux, preferably on a computer cluster with an LSF or SLURM job scheduling system. EUPAN, together with its standard operating procedure (SOP), is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html. Contact: ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.

  5. From Calculus to Number Theory

    Indian Academy of Sciences (India)

    A. Raghuram

    2016-11-04

    Nov 4, 2016 ... The harmonic series 1 + 1/2 + 1/3 + ... diverges to infinity. This means that given any number M, however large, we can add sufficiently many terms of the series to make the sum larger than M. This was first proved by Nicole Oresme (1323-1382), a brilliant French philosopher of his time.
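Oresme's argument groups the terms so that each block sums to at least 1/2; a standard reconstruction of it (not reproduced from the article itself) is:

```latex
\sum_{k=1}^{2^m} \frac{1}{k}
  = 1 + \frac{1}{2}
      + \underbrace{\left(\tfrac{1}{3}+\tfrac{1}{4}\right)}_{\ge\, 2\cdot\tfrac{1}{4} = \tfrac{1}{2}}
      + \underbrace{\left(\tfrac{1}{5}+\tfrac{1}{6}+\tfrac{1}{7}+\tfrac{1}{8}\right)}_{\ge\, 4\cdot\tfrac{1}{8} = \tfrac{1}{2}}
      + \cdots
  \ \ge\ 1 + \frac{m}{2},
```

so the partial sums exceed any given M once m > 2(M - 1).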

  6. Probing Critical Point Energies of Transition Metal Dichalcogenides: Surprising Indirect Gap of Single Layer WSe2

    KAUST Repository

    Zhang, Chendong; Chen, Yuxuan; Johnson, Amber; Li, Ming-yang; Li, Lain-Jong; Mende, Patrick C.; Feenstra, Randall M.; Shih, Chih Kang

    2015-01-01

    By using a comprehensive form of scanning tunneling spectroscopy, we have revealed detailed quasi-particle electronic structures in transition metal dichalcogenides, including the quasi-particle gaps, critical point energy locations, and their origins in the Brillouin zones. We show that single layer WSe2 surprisingly has an indirect quasi-particle gap with the conduction band minimum located at the Q-point (instead of K), albeit the two states are nearly degenerate. We have further observed rich quasi-particle electronic structures of transition metal dichalcogenides as a function of atomic structures and spin-orbit couplings. Such a local probe for detailed electronic structures in conduction and valence bands will be ideal to investigate how electronic structures of transition metal dichalcogenides are influenced by variations of local environment.

  7. Probing Critical Point Energies of Transition Metal Dichalcogenides: Surprising Indirect Gap of Single Layer WSe2

    KAUST Repository

    Zhang, Chendong

    2015-09-21

    By using a comprehensive form of scanning tunneling spectroscopy, we have revealed detailed quasi-particle electronic structures in transition metal dichalcogenides, including the quasi-particle gaps, critical point energy locations, and their origins in the Brillouin zones. We show that single layer WSe2 surprisingly has an indirect quasi-particle gap with the conduction band minimum located at the Q-point (instead of K), albeit the two states are nearly degenerate. We have further observed rich quasi-particle electronic structures of transition metal dichalcogenides as a function of atomic structures and spin-orbit couplings. Such a local probe for detailed electronic structures in conduction and valence bands will be ideal to investigate how electronic structures of transition metal dichalcogenides are influenced by variations of local environment.

  8. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated, and the dependence of these quantities on the Reynolds number is assessed.

  9. Virtual Volatility, an Elementary New Concept with Surprising Stock Market Consequences

    Science.gov (United States)

    Prange, Richard; Silva, A. Christian

    2006-03-01

    Textbook investors start by predicting the future price distribution, PDF, of a candidate stock (or portfolio) at horizon T, e.g. a year hence. A (log)normal PDF with center (= drift = expected return) μT and width (= volatility) σT is often assumed on Central Limit Theorem grounds, i.e. by a random walk of daily (log)price increments δs. The standard deviation, stdev, of historical (ex post) δs's is usually a fair predictor of the coming year's (ex ante) stdev(δs) = σ_daily, but the historical mean E(δs) at best roughly limits the true, to-be-predicted drift via μ_true T ≈ μ_hist T ± σ_hist T. Textbooks take a PDF with σ ≈ σ_daily and μ as somehow known, as if accurate predictions of μ were possible. It is elementary and presumably new to argue that an average of PDFs over a range of μ values should be taken, e.g. an average over forecasts by different analysts. We estimate that this leads to a PDF with a 'virtual' volatility σ ≈ 1.3 σ_daily. It is indeed clear that uncertainty in the value of the expected gain parameter increases the risk of investment in that security by most measures; e.g. Sharpe's ratio μT/σT will be 30% smaller because of this effect. It is significant and surprising that there are investments which benefit from this 30% virtual increase in the volatility
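The averaging over uncertain drift described above is an instance of the law of total variance: if the return is N(μ, σ²) given μ, and μ itself is spread with standard deviation σ_μ, the averaged (predictive) PDF has variance σ² + σ_μ². The choice σ_μ = 0.83 σ below is an illustrative value that reproduces the quoted factor of about 1.3; it is not taken from the paper.

```python
# Law-of-total-variance sketch of "virtual volatility": averaging normal
# PDFs over an uncertain drift mu inflates the predictive spread.
import math

def virtual_volatility(sigma, sigma_mu):
    """Predictive stdev when returns ~ N(mu, sigma^2) and mu has spread sigma_mu."""
    return math.sqrt(sigma**2 + sigma_mu**2)

sigma = 0.20               # assumed volatility (illustrative)
sigma_mu = 0.83 * sigma    # assumed drift uncertainty (illustrative)
print(virtual_volatility(sigma, sigma_mu) / sigma)  # ~1.30
```

Any risk measure built on the width of the PDF then scales up by this same factor, which is the mechanism behind the 30% figures in the abstract.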

  10. Selecting the Number of Principal Components in Functional Data

    KAUST Repository

    Li, Yehua

    2013-12-01

    Functional principal component analysis (FPCA) has become the most widely used dimension reduction tool for functional data analysis. We consider functional data measured at random, subject-specific time points, contaminated with measurement error, allowing for both sparse and dense functional data, and propose novel information criteria to select the number of principal components in such data. We propose a Bayesian information criterion based on marginal modeling that can consistently select the number of principal components for both sparse and dense functional data. For dense functional data, we also develop an Akaike information criterion based on the expected Kullback-Leibler information under a Gaussian assumption. In connection with the time series literature, we also consider a class of information criteria proposed for factor analysis of multivariate time series and show that they are still consistent for dense functional data, provided a prescribed undersmoothing scheme is undertaken in the FPCA algorithm. We perform intensive simulation studies and show that the proposed information criteria vastly outperform existing methods for this type of data. Surprisingly, our empirical evidence shows that the information criteria proposed for dense functional data also perform well for sparse functional data. An empirical example using colon carcinogenesis data is also provided to illustrate the results. Supplementary materials for this article are available online.
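The flavour of penalised selection of the number of components can be sketched on simulated dense data. The BIC below (residual variance plus a parameter-count penalty) is a generic illustration in the spirit of the abstract, not the authors' marginal-modelling criterion for functional data, and the simulation setup is an assumption.

```python
# Generic BIC-style selection of the number of principal components on
# simulated dense, low-rank-plus-noise data.
import numpy as np

def select_k_bic(X, k_max=10):
    """Pick the number of components minimising a simple BIC on PCA residuals."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    N = n * d
    best_k, best_bic = 0, np.inf
    for k in range(1, k_max + 1):
        rss = float(np.sum(s[k:] ** 2))      # residual sum of squares after k PCs
        n_params = k * (n + d)               # rough count: scores + loadings
        bic = N * np.log(rss / N) + n_params * np.log(N)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

# Rank-3 signal plus noise; the criterion should recover 3 components.
rng = np.random.default_rng(0)
n, d, true_k = 200, 20, 3
scores = rng.normal(size=(n, true_k)) * np.array([10.0, 8.0, 6.0])
loadings = np.linalg.qr(rng.normal(size=(d, true_k)))[0]
X = scores @ loadings.T + 0.5 * rng.normal(size=(n, d))
print(select_k_bic(X))  # typically 3 for this rank-3 setup
```

The penalty term is what stops the criterion from chasing noise once the true components are exhausted, which is the role the paper's marginal-modeling BIC plays for functional data.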

  11. Interaction between numbers and size during visual search

    NARCIS (Netherlands)

    Krause, F.; Bekkering, H.; Pratt, J.; Lindemann, O.

    2017-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit

  12. Fundamental surprise in the application of airpower

    Science.gov (United States)

    2017-05-25

    explain transformations in scientific research proposed by Thomas Kuhn in his book "The Structure of Scientific Revolutions." Kuhn proposed the idea...that the accepted traditions of scientific research within a particular community, known as a paradigm, provide the tools to perform "normal science...the large-scale attacks on Lebanese infrastructure would have on the regime, this concept was a non-starter. The order issued to the IDF on 12 July

  13. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However, the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  14. Large SNP arrays for genotyping in crop plants

    Indian Academy of Sciences (India)

    Genotyping with large numbers of molecular markers is now an indispensable tool within plant genetics and breeding. Especially through the identification of large numbers of single nucleotide polymorphism (SNP) markers using the novel high-throughput sequencing technologies, it is now possible to reliably identify many ...

  15. A conceptual geochemical model of the geothermal system at Surprise Valley, CA

    Science.gov (United States)

    Fowler, Andrew P. G.; Ferguson, Colin; Cantwell, Carolyn A.; Zierenberg, Robert A.; McClain, James; Spycher, Nicolas; Dobson, Patrick

    2018-03-01

    Characterizing the geothermal system at Surprise Valley (SV), northeastern California, is important for determining the sustainability of the energy resource, and mitigating hazards associated with hydrothermal eruptions that last occurred in 1951. Previous geochemical studies of the area attempted to reconcile different hot spring compositions on the western and eastern sides of the valley using scenarios of dilution, equilibration at low temperatures, surface evaporation, and differences in rock type along flow paths. These models were primarily supported using classical geothermometry methods, and generally assumed that fluids in the Lake City mud volcano area on the western side of the valley best reflect the composition of a deep geothermal fluid. In this contribution, we address controls on hot spring compositions using a different suite of geochemical tools, including optimized multicomponent geochemistry (GeoT) models, hot spring fluid major and trace element measurements, mineralogical observations, and stable isotope measurements of hot spring fluids and precipitated carbonates. We synthesize the results into a conceptual geochemical model of the Surprise Valley geothermal system, and show that high-temperature (quartz, Na/K, Na/K/Ca) classical geothermometers fail to predict maximum subsurface temperatures because fluids re-equilibrated at progressively lower temperatures during outflow, including in the Lake City area. We propose a model where hot spring fluids originate as a mixture between a deep thermal brine and modern meteoric fluids, with a seasonally variable mixing ratio. The deep brine has deuterium values at least 3 to 4‰ lighter than any known groundwater or high-elevation snow previously measured in and adjacent to SV, suggesting it was recharged during the Pleistocene when meteoric fluids had lower deuterium values. The deuterium values and compositional characteristics of the deep brine have only been identified in thermal springs and

  16. Chandra Finds Surprising Black Hole Activity In Galaxy Cluster

    Science.gov (United States)

    2002-09-01

    Scientists at the Carnegie Observatories in Pasadena, California, have uncovered six times the expected number of active, supermassive black holes in a single viewing of a cluster of galaxies, a finding that has profound implications for theories as to how old galaxies fuel the growth of their central black holes. The finding suggests that voracious, central black holes might be as common in old, red galaxies as they are in younger, blue galaxies, a surprise to many astronomers. The team made this discovery with NASA's Chandra X-ray Observatory. They also used Carnegie's 6.5-meter Walter Baade Telescope at the Las Campanas Observatory in Chile for follow-up optical observations. "This changes our view of galaxy clusters as the retirement homes for old and quiet black holes," said Dr. Paul Martini, lead author on a paper describing the results that appears in the September 10 issue of The Astrophysical Journal Letters. "The question now is, how do these black holes produce bright X-ray sources, similar to what we see from much younger galaxies?" Typical of the black hole phenomenon, the cores of these active galaxies are luminous in X-ray radiation. Yet, they are obscured, and thus essentially undetectable in the radio, infrared and optical wavebands. "X rays can penetrate obscuring gas and dust as easily as they penetrate the soft tissue of the human body to look for broken bones," said co-author Dr. Dan Kelson. "So, with Chandra, we can peer through the dust and we have found that even ancient galaxies with 10-billion-year-old stars can have central black holes still actively pulling in copious amounts of interstellar gas. This activity has simply been hidden from us all this time. This means these galaxies aren't over the hill after all and our theories need to be revised." Scientists say that supermassive black holes -- having the mass of millions to billions of suns squeezed into a region about the size of our Solar System -- are the engines in the cores of

  17. MPQS with three large primes

    NARCIS (Netherlands)

    Leyland, P.; Lenstra, A.K.; Dodson, B.; Muffett, A.; Wagstaff, S.; Fieker, C.; Kohel, D.R.

    2002-01-01

    We report the factorization of a 135-digit integer by the triple-large-prime variation of the multiple polynomial quadratic sieve. Previous workers [6][10] had suggested that using more than two large primes would be counterproductive, because of the greatly increased number of false reports from

  18. Charge-density waves in alpha-uranium: A story of endless surprises

    International Nuclear Information System (INIS)

    Lander, G.H.

    1982-01-01

    The properties of element 92, uranium, at low temperature have remained an enigma since major anomalies in almost all physical property measurements were first reported over twenty years ago. By far the most dramatic measurements were those by Fisher on the elastic constants, which strongly suggested a structural phase transition at ≈ 43 K. Initially no such phase transition was found. Recently, neutron inelastic experiments at Oak Ridge mapped out the phonon dispersion curves at room temperature, and in the process discovered an anomalous soft phonon of Σ_4 symmetry along the [100] axis. On cooling, weak satellites were found to form near the position [0.5, 0.0], thus signaling a periodic distortion. However, such a charge-density wave appeared to have a complex wave vector relationship with the fundamental lattice, leading the authors to introduce a two-phase model for the phase transition. Simultaneously, by using a photographic technique designed to view large segments of reciprocal space, Marmeggi and Delapalme at the ILL discovered a completely new set of satellite reflections, indexable with wave vector [0.5, q_y, q_z], where q_y and q_z are incommensurate (≈ 0.18), not equal, and vary with temperature. We have now measured the intensities of a great number of these new satellites and been able to fit the results with a modulated α-U structure. The atoms are displaced in all three independent crystallographic directions according to a sinusoidal wave form. The overall agreement between the predicted and observed structure factors is excellent, suggesting that at least the static positions of the atoms at low temperature in this element are now understood. In this review the status of research on the structural phase transition will be presented. Neither the full details of the phase transition nor the reasons for it are understood at this time. A number of further experiments are suggested. (orig.)

  19. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

    CCS is seen as an important and strategic technology option for China to reduce its CO2 emission, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large scale deployment of CCS, the initial barriers and advantages that China currently possesses, as well as the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of larger scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China gain the benefits with CCS demonstration soon, and make great contributions to China's big CO2 reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  20. The proceedings of the 1st international workshop on laboratory astrophysics experiments with large lasers

    International Nuclear Information System (INIS)

    Remington, B.A.; Goldstein, W.H.

    1996-01-01

    The world has stood witness to the development of a number of highly sophisticated and flexible, high power laser facilities (energies up to 50 kJ and powers up to 50 TW), driven largely by the world-wide effort in inertial confinement fusion (ICF). The charter of diagnosing implosions with detailed, quantitative measurements has driven the ICF laser facilities to be exceedingly versatile and well equipped with diagnostics. Interestingly, there is considerable overlap in the physics of ICF and astrophysics. Both typically involve compressible radiative hydrodynamics, radiation transport, complex opacities, and equations of state of dense matter. Surprisingly, however, there has been little communication between these two communities to date. With the recent declassification of ICF in the USA, and the approval to commence with construction of the next generation "superlasers", the 2 MJ National Ignition Facility in the US, and its equivalent, the LMJ laser in France, the situation is ripe for change. Given the physics similarities that exist between ICF and astrophysics, one strongly suspects that there should exist regions of overlap where supporting research on the large lasers could be beneficial to the astrophysics community. As a catalyst for discussions to this end, Lawrence Livermore National Laboratory sponsored this workshop. Approximately 100 scientists attended from around the world, representing eight countries: the USA, Canada, UK, France, Germany, Russia, Japan, and Israel. A total of 30 technical papers were presented. The two day workshop was divided into four sessions, focusing on nonlinear hydrodynamics, radiative hydrodynamics, radiation transport, and atomic physics-opacities. Copies of the presentations are contained in these proceedings

  1. The proceedings of the 1st international workshop on laboratory astrophysics experiments with large lasers

    Energy Technology Data Exchange (ETDEWEB)

    Remington, B.A.; Goldstein, W.H. [eds.]

    1996-08-09

    The world has stood witness to the development of a number of highly sophisticated and flexible, high power laser facilities (energies up to 50 kJ and powers up to 50 TW), driven largely by the world-wide effort in inertial confinement fusion (ICF). The charter of diagnosing implosions with detailed, quantitative measurements has driven the ICF laser facilities to be exceedingly versatile and well equipped with diagnostics. Interestingly, there is considerable overlap in the physics of ICF and astrophysics. Both typically involve compressible radiative hydrodynamics, radiation transport, complex opacities, and equations of state of dense matter. Surprisingly, however, there has been little communication between these two communities to date. With the recent declassification of ICF in the USA, and the approval to commence with construction of the next generation "superlasers", the 2 MJ National Ignition Facility in the US, and its equivalent, the LMJ laser in France, the situation is ripe for change. Given the physics similarities that exist between ICF and astrophysics, one strongly suspects that there should exist regions of overlap where supporting research on the large lasers could be beneficial to the astrophysics community. As a catalyst for discussions to this end, Lawrence Livermore National Laboratory sponsored this workshop. Approximately 100 scientists attended from around the world, representing eight countries: the USA, Canada, UK, France, Germany, Russia, Japan, and Israel. A total of 30 technical papers were presented. The two day workshop was divided into four sessions, focusing on nonlinear hydrodynamics, radiative hydrodynamics, radiation transport, and atomic physics-opacities. Copies of the presentations are contained in these proceedings.

  2. How Gamification Affects Physical Activity: Large-scale Analysis of Walking Challenges in a Mobile Application.

    Science.gov (United States)

    Shameli, Ali; Althoff, Tim; Saberi, Amin; Leskovec, Jure

    2017-04-01

    Gamification represents an effective way to incentivize user behavior across a number of computing applications. However, despite the fact that physical activity is essential for a healthy lifestyle, surprisingly little is known about how gamification and in particular competitions shape human physical activity. Here we study how competitions affect physical activity. We focus on walking challenges in a mobile activity tracking application where multiple users compete over who takes the most steps over a predefined number of days. In particular, we analyze nearly 2,500 physical activity competitions over a period of one year capturing more than 800,000 person days of activity tracking. We observe that during walking competitions, the average user increases physical activity by 23%. Furthermore, there are large increases in activity for both men and women across all ages and weight statuses, and even for users that were previously fairly inactive. We also find that the composition of participants greatly affects the dynamics of the game. In particular, if highly unequal participants get matched to each other, then competition suffers and the overall effect on the physical activity drops significantly. Furthermore, competitions with an equal mix of both men and women are more effective in increasing the level of activities. We leverage these insights to develop a statistical model to predict, with significant accuracy, whether or not a competition will be particularly engaging. Our models can serve as a guideline to help design more engaging competitions that lead to most beneficial behavioral changes. We synthesize our findings in a series of game and app design implications.

  3. Cerebral metastasis masquerading as cerebritis: A case of misguiding history and radiological surprise!

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2013-01-01

    Cerebral metastases usually have a characteristic radiological appearance and can be differentiated rather easily from any infective etiology. Similarly, a positive medical history guides the neurosurgeon towards the probable diagnosis and adds to the diagnostic armamentarium. However, occasionally, similarities on imaging may be encountered where even history could lead us in the wrong direction and bias the clinician. We report a case of a 40-year-old female with a history of mastoidectomy for otitis media presenting to us with a space-occupying lesion in the right parietal region, which was thought pre-operatively to be an abscess with surrounding cerebritis. Surprisingly, the histopathology proved it to be a metastatic adenocarcinoma. Hence, while a ring-enhancing lesion may be a high-grade neoplasm, metastasis, or abscess, significant gyral enhancement, a feature of cerebritis, is less often linked with a neoplastic etiology. This overlap may lead to delayed diagnosis, and incorrect prognostication and treatment, in patients with a coincidental suggestive history of infection. We review the literature and highlight the key points that help to differentiate an infective from a neoplastic pathology, which may look similar at times.

  4. Gravity Cutoff in Theories with Large Discrete Symmetries

    International Nuclear Information System (INIS)

    Dvali, Gia; Redi, Michele; Sibiryakov, Sergey; Vainshtein, Arkady

    2008-01-01

    We set an upper bound on the gravitational cutoff in theories with exact quantum numbers of large-N periodicity, such as Z_N discrete symmetries. The bound stems from black hole physics. It is similar to the bound appearing in theories with N particle species, though a priori, a large discrete symmetry does not imply a large number of species. Thus, there emerges a potentially wide class of new theories that address the hierarchy problem by lowering the gravitational cutoff due to the existence of large Z_(10^32)-type symmetries
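
    For orientation, the species-type bound alluded to in the abstract is usually quoted as (a standard statement of the black-hole species bound, not a formula copied from this record):

```latex
\Lambda \;\lesssim\; \frac{M_{\mathrm{Pl}}}{\sqrt{N}}
```

    so that a symmetry of order N ≈ 10^32 would lower the gravitational cutoff from M_Pl ≈ 10^19 GeV to roughly the TeV scale, which is how such theories can address the hierarchy problem.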

  5. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  6. Segmentation, Diarization and Speech Transcription: Surprise Data Unraveled

    NARCIS (Netherlands)

    Huijbregts, M.A.H.

    2008-01-01

    In this thesis, research on large vocabulary continuous speech recognition for unknown audio conditions is presented. For automatic speech recognition systems based on statistical methods, it is important that the conditions of the audio used for training the statistical models match the conditions

  7. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  8. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied
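
    For reference, the Rayleigh number quoted above is the standard dimensionless control parameter of Rayleigh-Benard convection (standard definition, not taken from this record):

```latex
\mathrm{Ra} \;=\; \frac{g\,\alpha\,\Delta T\,L^{3}}{\nu\,\kappa}
```

    where g is the gravitational acceleration, α the thermal expansion coefficient, ΔT the imposed temperature difference, L the layer depth, ν the kinematic viscosity and κ the thermal diffusivity.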

  9. Number Line Estimation: The Use of Number Line Magnitude Estimation to Detect the Presence of Math Disability in Postsecondary Students

    Science.gov (United States)

    McDonald, Steven A.

    2010-01-01

    This study arose from an interest in the possible presence of mathematics disabilities among students enrolled in the developmental math program at a large university in the Mid-Atlantic region. Research in mathematics learning disabilities (MLD) has included a focus on the construct of working memory and number sense. A component of number sense…

  10. Investor Reaction to Market Surprises on the Istanbul Stock Exchange = İstanbul Menkul Kıymetler Borsasında Piyasa Sürprizlerine Yatırımcı Tepkisi

    Directory of Open Access Journals (Sweden)

    Yaman Ömer ERZURUMLU

    2011-08-01

    This paper examines the reaction of investors to the arrival of unexpected information on the Istanbul Stock Exchange. The empirical results suggest that the investor reaction following unexpected news on the ISE100 is consistent with the Overreaction Hypothesis, especially after unfavorable market surprises. Interestingly, no such pattern exists for the ISE30 index, which includes more liquid and informationally efficient securities. A possible implication of this study for investors is that employing a semi-contrarian investment strategy of buying losers in the ISE100 may generate superior returns. Moreover, the results are supportive of the latest regulation change of the Capital Market Board of Turkey, which mandates more disclosure regarding the trading of less liquid stocks with lower market capitalization.

  11. Strategies in filtering in the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2000-01-01

    A critical step when factoring large integers by the Number Field Sieve consists of finding dependencies in a huge sparse matrix over the field GF(2), using a Block Lanczos algorithm. Both size and weight (the number of non-zero elements) of the matrix critically affect the running time
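
    Production factoring codes use Block Lanczos for this step, but what "finding a dependency in a matrix over GF(2)" means can be shown with plain Gaussian elimination on a toy matrix (an illustrative sketch only, not the Block Lanczos algorithm or actual NFS code):

```python
import numpy as np

def gf2_dependency(rows):
    """Return a 0/1 vector c with c . rows == 0 (mod 2), if one exists.

    rows: list of equal-length 0/1 lists. For each reduced row we track
    which original rows were XORed into it; a row that reduces to zero
    yields a dependency among the original rows.
    """
    basis = []                      # (pivot_col, reduced_row, combo)
    n = len(rows)
    for i, r in enumerate(rows):
        row = np.array(r, dtype=np.uint8) % 2
        combo = np.zeros(n, dtype=np.uint8)
        combo[i] = 1
        for pivot, brow, bcombo in basis:
            if row[pivot]:
                row ^= brow         # eliminate this pivot column
                combo ^= bcombo
        nz = np.flatnonzero(row)
        if nz.size == 0:
            return combo            # XOR of these original rows is zero
        basis.append((nz[0], row, combo))
    return None

# Rows 0, 1 and 2 of this matrix sum to zero mod 2.
M = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
dep = gf2_dependency(M)
print(dep)                          # -> [1 1 1 0]
```

    In the NFS this dependency is what combines relations into a congruence of squares; Block Lanczos finds it without the cubic cost of elimination.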

  12. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    Directory of Open Access Journals (Sweden)

    Julia Siemann

    2018-04-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD.

  13. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    Science.gov (United States)

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD. PMID:29725316

  14. Experimental determination of Ramsey numbers.

    Science.gov (United States)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
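
    The smallest nontrivial value, R(3,3) = 6, can still be verified classically by exhaustive search over all 2-colorings of the complete graph's edges (a brute-force sketch for illustration; the adiabatic quantum experiment above does not, of course, work this way):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each 2-element edge of K_n to 0 (red) or 1 (blue)."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, bits)))
        for bits in product((0, 1), repeat=len(edges))
    )

# R(3,3) = 6: some 2-coloring of K_5 avoids a monochromatic triangle
# (the pentagon/pentagram coloring), but every 2-coloring of K_6
# contains one.
print(every_coloring_has_mono_triangle(5))   # False
print(every_coloring_has_mono_triangle(6))   # True
```

    The explosive growth the abstract mentions is visible already here: K_6 has 2^15 colorings to check, and this approach is hopeless for the larger Ramsey numbers.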

  15. Using Java to visualize and manipulate large arrays of neutron scattering data

    International Nuclear Information System (INIS)

    Mikkelson, D.; Worlton, T.; Chatterjee, A.; Hammonds, J.; Chen, D.

    2000-01-01

    The Intense Pulsed Neutron Source at Argonne National Laboratory is a world class pulsed neutron source with thirteen instruments designed to characterize materials using time-of-flight neutron scattering techniques. For each instrument, a collimated pulse of neutrons is directed to a material sample. The neutrons are scattered by the sample and detected by arrays of detectors. The type, number and arrangement of detectors vary widely from instrument to instrument, depending on which properties of materials are being studied. In all cases, the faster, higher energy neutrons reach the detectors sooner than the lower energy neutrons. This produces a time-of-flight spectrum at each detector element. The time-of-flight spectrum produced by each detector element records the scattering intensity at hundreds to thousands of discrete time intervals. Since there are typically between two hundred and ten thousand distinct detector elements, a single set of raw data can include millions of points. Often many such datasets are collected for a single sample to determine the effect of different conditions on the microscopic structure and dynamics of the sample. In this project, Java was used to construct a portable, highly interactive system for viewing and operating on large collections of time-of-flight spectra. Java performed surprisingly well in handling large amounts of data quickly and was fast enough even with standard PC hardware. Although Java may not be the choice at this time for applications where computational efficiency is the primary requirement, any disadvantages in this case were outweighed by the advantages of a clean object-oriented language with a portable set of GUI components. The authors anticipate that Java will prove useful for scientific computing and data visualization in situations where portability, ease of use and effective use of software development manpower are critical

  16. Classical theory of algebraic numbers

    CERN Document Server

    Ribenboim, Paulo

    2001-01-01

    Gauss created the theory of binary quadratic forms in "Disquisitiones Arithmeticae", and Kummer invented ideals and the theory of cyclotomic fields in his attempt to prove Fermat's Last Theorem. These were the starting points for the theory of algebraic numbers, developed in the classical papers of Dedekind, Dirichlet, Eisenstein, Hermite and many others. This theory, enriched with more recent contributions, is of basic importance in the study of diophantine equations and arithmetic algebraic geometry, including methods in cryptography. This book has a clear and thorough exposition of the classical theory of algebraic numbers, and contains a large number of exercises as well as worked out numerical examples. The Introduction is a recapitulation of results about principal ideal domains, unique factorization domains and commutative fields. Part One is devoted to residue classes and quadratic residues. In Part Two one finds the study of algebraic integers, ideals, units, class numbers, the theory of decomposition, iner...

  17. A large electrically excited synchronous generator

    DEFF Research Database (Denmark)

    2014-01-01

    This invention relates to a large electrically excited synchronous generator (100), comprising a stator (101), and a rotor or rotor coreback (102) comprising an excitation coil (103) generating a magnetic field during use, wherein the rotor or rotor coreback (102) further comprises a plurality...... adjacent neighbouring poles. In this way, a large electrically excited synchronous generator (EESG) is provided that readily enables a relatively large number of poles, compared to a traditional EESG, since the excitation coil in this design provides MMF for all the poles, whereas in a traditional EESG...... each pole needs its own excitation coil, which limits the number of poles as each coil will take up too much space between the poles....

  18. CopyNumber450kCancer: baseline correction for accurate copy number calling from the 450k methylation array.

    Science.gov (United States)

    Marzouka, Nour-Al-Dain; Nordlund, Jessica; Bäcklin, Christofer L; Lönnerholm, Gudmar; Syvänen, Ann-Christine; Carlsson Almlöf, Jonas

    2016-04-01

    The Illumina Infinium HumanMethylation450 BeadChip (450k) is widely used for the evaluation of DNA methylation levels in large-scale datasets, particularly in cancer. The 450k design allows copy number variant (CNV) calling using existing bioinformatics tools. However, in cancer samples, numerous large-scale aberrations cause shifting in the probe intensities and thereby may result in erroneous CNV calling. Therefore, a baseline correction process is needed. We suggest the maximum peak of probe segment density to correct the shift in the intensities in cancer samples. CopyNumber450kCancer is implemented as an R package. The package with examples can be downloaded at http://cran.r-project.org. Contact: nour.marzouka@medsci.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
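The core idea, shifting log-ratios so the densest (presumed copy-neutral) level sits at zero, can be sketched outside R. The following Python sketch is a simplified illustration of a "maximum peak of density" correction, not the CopyNumber450kCancer implementation; the simulated shift and segment sizes are assumptions.

```python
import numpy as np

def baseline_correct(log_ratios, bins=200):
    """Shift copy-number log-ratios so that the densest level (assumed to
    be the copy-neutral state) sits at zero: a simplified 'maximum peak
    of density' baseline correction."""
    counts, edges = np.histogram(log_ratios, bins=bins)
    i = np.argmax(counts)
    peak = 0.5 * (edges[i] + edges[i + 1])  # centre of the densest bin
    return log_ratios - peak

# Simulated sample whose neutral level is shifted upward by +0.3:
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.3, 0.02, 900),   # neutral probes, shifted
                       rng.normal(1.0, 0.02, 100)])  # gained region
corrected = baseline_correct(data)
```

After correction the large neutral cluster is re-centred near zero while the gained region keeps its positive offset, which is what makes downstream gain/loss calls reliable.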

  19. Rhythmic changes in synapse numbers in Drosophila melanogaster motor terminals.

    Directory of Open Access Journals (Sweden)

    Santiago Ruiz

    Full Text Available Previous studies have shown that the morphology of the neuromuscular junction of the flight motor neuron MN5 in Drosophila melanogaster undergoes daily rhythmical changes, with smaller synaptic boutons during the night, when the fly is resting, than during the day, when the fly is active. With electron microscopy and laser confocal microscopy, we searched for a rhythmic change in synapse numbers in this neuron, both under light:darkness (LD) cycles and constant darkness (DD). We expected the number of synapses to increase during the morning, when the fly has an intense phase of locomotion activity under LD and DD. Surprisingly, only our DD data were consistent with this hypothesis. In LD, we found more synapses at midnight than at midday. We propose that under LD conditions, there is a daily rhythm of formation of new synapses in the dark phase, when the fly is resting, and disassembly over the light phase, when the fly is active. Several parameters appeared to be light dependent, since they were affected differently under LD or DD. The great majority of boutons containing synapses had only one and very few had either two or more, with a 70∶25∶5 ratio (one, two, and three or more synapses) in LD and 75∶20∶5 in DD. Given the maintenance of this proportion even when both bouton and synapse numbers changed with time, we suggest that there is a homeostatic mechanism regulating synapse distribution among MN5 boutons.

  20. Positive interactions between large herbivores and grasshoppers, and their consequences for grassland plant diversity.

    Science.gov (United States)

    Zhong, Zhiwei; Wang, Deli; Zhu, Hui; Wang, Ling; Feng, Chao; Wang, Zhongnan

    2014-04-01

    Although the influence of positive interactions on plant and sessile communities has been well documented, surprisingly little is known about their role in structuring terrestrial animal communities. We evaluated beneficial interactions between two distantly related herbivore taxa, large vertebrate grazers (sheep) and smaller insect grazers (grasshoppers), using a set of field experiments in eastern Eurasian steppe of China. Grazing by large herbivores caused significantly higher grasshopper density, and this pattern persisted until the end of the experiment. Grasshoppers, in turn, increased the foraging time of larger herbivores, but such response occurred only during the peak of growing season (August). These reciprocal interactions were driven by differential herbivore foraging preferences for plant resources; namely, large herbivores preferred Artemisia forbs, whereas grasshoppers preferred Leymus grass. The enhancement of grasshopper density in areas grazed by large herbivores likely resulted from the selective consumption of Artemisia forbs by vertebrate grazers, which may potentially improve the host finding of grasshoppers. Likewise, grasshoppers appeared to benefit large herbivores by decreasing the cover and density of the dominant grass Leymus chinensis, which hampers large herbivores' access to palatable forbs. Moreover, we found that large herbivores grazing alone may significantly decrease plant diversity, yet grasshoppers appeared to mediate such negative effects when they grazed with large herbivores. Our results suggest that the positive, reciprocal interactions in terrestrial herbivore communities may be more prevalent and complex than previously thought.

  1. Production of Numbers about the Future

    DEFF Research Database (Denmark)

    Huikku, Jari; Mouritsen, Jan; Silvola, Hanna

    of prominent Finnish business managers, auditors, analysts, investors, financial supervisory authority, academics and media, the paper extends prior research which has used large data. The paper analyses impairment testing as a process where a network of human and non-human actors produces numbers about

  2. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990's, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, from which environmental data and information on the stock of their main diet, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea had an effect on the ecosystem affecting the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDING: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. There was (1) an increasing amount of fertilizer used in agriculture, and this increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the amount of mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  3. Analogies and surprising differences between recombinant nitric oxide synthase-like proteins from Staphylococcus aureus and Bacillus anthracis in their interactions with l-arginine analogs and iron ligands.

    Science.gov (United States)

    Salard, Isabelle; Mercey, Emilie; Rekka, Eleni; Boucher, Jean-Luc; Nioche, Pierre; Mikula, Ivan; Martasek, Pavel; Raman, C S; Mansuy, Daniel

    2006-12-01

    Genome sequencing has recently shown the presence of genes coding for NO-synthase (NOS)-like proteins in bacteria. The roles of these proteins remain unclear. The interactions of a series of l-arginine (l-arg) analogs and iron ligands with two recombinant NOS-like proteins from Staphylococcus aureus (saNOS) and Bacillus anthracis (baNOS) have been studied by UV-visible spectroscopy. SaNOS and baNOS in their ferric native state, as well as their complexes with l-arg analogs and with various ligands, exhibit spectral characteristics highly similar to the corresponding complexes of heme-thiolate proteins such as cytochromes P450 and NOSs. However, saNOS greatly differs from baNOS at the level of three main properties: (i) native saNOS mainly exists under an hexacoordinated low-spin ferric state whereas native baNOS is mainly high-spin, (ii) the addition of tetrahydrobiopterin (H4B) or H4B analogs leads to an increase of the affinity of l-arg for saNOS but not for baNOS, and (iii) saNOS Fe(II), contrary to baNOS, binds relatively bulky ligands such as nitrosoalkanes and tert-butylisocyanide. Thus, saNOS exhibits properties very similar to those of the oxygenase domain of inducible NOS (iNOS(oxy)) not containing H4B, as expected for a NOSoxy-like protein that does not contain H4B. By contrast, the properties of baNOS which look like those of H4B-containing iNOS(oxy) are unexpected for a NOS-like protein not containing H4B. The origin of these surprising properties of baNOS remains to be determined.

  4. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (˜ 3) to very large (˜ 10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations which are well described by a hydrodynamic model in the large particle limit.
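The mean-field benchmark referred to above is the Gross-Pitaevskii equation; in its stationary form (standard textbook notation, not the paper's modified version) it reads:

```latex
\left[ -\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{trap}}(\mathbf{r})
       + g\,\lvert\psi(\mathbf{r})\rvert^2 \right] \psi(\mathbf{r})
  = \mu\,\psi(\mathbf{r}),
\qquad
g = \frac{4\pi\hbar^2 a_s}{m},
```

where $a_s$ is the s-wave scattering length and $\mu$ the chemical potential. The contact term $g\lvert\psi\rvert^2$ is exactly the zero-range repulsive interaction that the abstract contrasts with the finite-range van der Waals potential and its attractive $-C_6/r^6$ tail.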

  5. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1,983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  6. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = −0.0909 to Rot = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
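The static Smagorinsky closure used in the benchmark computes an eddy viscosity from the resolved strain rate. A minimal sketch follows, evaluated on a pure-shear example; the filter width and velocity gradient are illustrative assumptions, not values from the study.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs * delta)^2 * |S|,
    where |S| = sqrt(2 S_ij S_ij) and S is the resolved strain-rate
    tensor (symmetric part of the 3x3 velocity-gradient tensor)."""
    s = 0.5 * (grad_u + grad_u.T)         # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))  # characteristic strain rate, 1/s
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 10 1/s on an assumed 1 mm filter width;
# analytically nu_t = (0.1 * 1e-3)^2 * 10 = 1e-7 m^2/s.
g = np.zeros((3, 3))
g[0, 1] = 10.0
print(smagorinsky_nu_t(g, 1e-3))
```

"Over-damping" in the abstract's sense corresponds simply to raising `cs` above 0.1, which scales the modeled dissipation by `cs**2`.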

  7. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  8. On the strong law of large numbers for $\\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

    For $p\ge 1$ let $\varphi_p(x)=x^2/2$ if $|x|\le 1$ and $\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\xi$ let $\tau_{\varphi_p}(\xi)$ denote $\inf\{a\ge 0:\;\forall_{\lambda\in\mathbb{R}}\; \ln\mathbb{E}\exp(\lambda\xi)\le\varphi_p(a\lambda)\}$; $\tau_{\varphi_p}$ is a norm in the space $Sub_{\varphi_p}=\{\xi:\;\tau_{\varphi_p}(\xi)<\infty\}$ of $\varphi_p$-subgaussian random variables. We prove that (for $p>1$) there exist positive constants $c$ and $\alpha$ such that for every natural number $n$ the following inequality $\tau_{\varphi_p}(\sum_{i=1...

  9. Communication Management and Trust: Their Role in Building Resilience to "Surprises" Such As Natural Disasters, Pandemic Flu, and Terrorism

    Directory of Open Access Journals (Sweden)

    P. H. Longstaff

    2008-06-01

    Full Text Available In times of public danger such as natural disasters and health emergencies, a country's communication systems will be some of its most important assets because access to information will make individuals and groups more resilient. Communication by those charged with dealing with the situation is often critical. We analyzed reports from a wide variety of crisis incidents and found a direct correlation between trust and an organization's preparedness and internal coordination of crisis communication and the effectiveness of its leadership. Thus, trust is one of the most important variables in effective communication management in times of "surprise."

  10. Advanced manipulator system for large hot cells

    International Nuclear Information System (INIS)

    Vertut, J.; Moreau, C.; Brossard, J.P.

    1981-01-01

    Large hot cells can be approached as extrapolations of smaller ones, wider, higher or longer in size, with the same concept of using mechanical master-slave manipulators and high-density windows. This concept leads to a large number of working places and corresponding equipment, with a number of penetrations through the biological protection. When the large cell does not need permanent operation of a number of work places, as in particular to serve PIE machines and maintain the facility, the use of servo manipulators with a large supporting unit and extensive use of television appears optimal. The advances on the MA 23 and its supports will be described, including the extra facilities related to manipulator introduction and maintenance. The possibility to combine a powered manipulator and MA 23 (single or pair) on the same boom crane system will be described. An advanced control system that minimizes the dead time in controlling support movement, associated with master-slave arm operation, is under development. The general television system includes overview cameras, associated with the limited number of windows, and manipulator cameras. A special new system will be described which brings automatic control of the manipulator cameras and saves operator load and dead time. Full scale tests with MA 23 and support will be discussed. (author)

  11. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    Full Text Available The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, known as the “particle chains phenomenon”. However, the study in 3D models (spherical particles) is rarely reported due to its complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. The similar particles (either all positive or all negative DEP particles) always repelled each other, and did not form a chain. The dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other, and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can be randomly multitudinous depending on the initial particle distribution, the electric properties of particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated via tuning the frequency of the AC field and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in the particle manipulation of microfluidics.
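Whether a particle experiences positive or negative DEP is set by the sign of the real part of the Clausius-Mossotti factor, the standard dipole-moment ingredient that methods such as IDM build on, and tuning the AC frequency changes that sign. A hedged sketch follows; the permittivity and conductivity values are illustrative assumptions, not parameters from the paper.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_p, eps_m, sigma_p, sigma_m, freq_hz):
    """Complex Clausius-Mossotti factor K for a sphere. Its real part sets
    the DEP sign: Re(K) > 0 -> positive DEP (toward field maxima),
    Re(K) < 0 -> negative DEP (away from field maxima)."""
    w = 2.0 * math.pi * freq_hz
    ep = eps_p - 1j * sigma_p / w   # complex permittivity of particle
    em = eps_m - 1j * sigma_m / w   # complex permittivity of medium
    return (ep - em) / (ep + 2.0 * em)

# Illustrative case: a low-permittivity bead in water at 1 MHz shows
# negative DEP (Re(K) < 0).
K = clausius_mossotti(2.5 * EPS0, 78 * EPS0, 1e-4, 1e-3, 1e6)
print(K.real)
```

A mixture in which some particles have Re(K) > 0 and others Re(K) < 0 at the operating frequency is precisely the "dissimilar particles" case that forms alternating chains in the abstract.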

  12. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.

  13. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)

    2015-06-15

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles increasing arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates to a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease target volume.

  14. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    International Nuclear Information System (INIS)

    Chiu, J; Ma, L

    2015-01-01

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles increasing arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates to a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease target volume.

  15. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  16. The natural number bias and its role in rational number understanding in children with dyscalculia. Delay or deficit?

    Science.gov (United States)

    Van Hoof, Jo; Verschaffel, Lieven; Ghesquière, Pol; Van Dooren, Wim

    2017-12-01

    Previous research indicated that in several cases learners' errors on rational number tasks can be attributed to learners' tendency to (wrongly) apply natural number properties. There exists a large body of literature both on learners' struggle with understanding the rational number system and on the role of the natural number bias in this struggle. However, little is known about this phenomenon in learners with dyscalculia. We investigated the rational number understanding of learners with dyscalculia and compared it with the rational number understanding of learners without dyscalculia. Three groups of learners were included: sixth graders with dyscalculia, a chronological age match group, and an ability match group. The results showed that the rational number understanding of learners with dyscalculia is significantly lower than that of typically developing peers, but not significantly different from younger learners, even after statistically controlling for mathematics achievement. Next to a delay in their mathematics achievement, learners with dyscalculia seem to have an extra delay in their rational number understanding, compared with peers. This is especially the case in those rational number tasks where one has to inhibit natural number knowledge to come to the right answer. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

    Full Text Available Abstract Background Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb) and covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.

  18. IEEE Standard for Floating Point Numbers

    Indian Academy of Sciences (India)

    IAS Admin

    Floating point numbers are an important data type in computation which is used ... quite large! Integers are ... exp, the value of the exponent will be taken as (exp –127). The ..... bit which is truncated is 1, add 1 to the least significant bit, else.
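The (exp − 127) bias mentioned in the snippet is easy to verify by unpacking the bit fields of an IEEE-754 single-precision value:

```python
import struct

def decode_float32(x):
    """Unpack a float into IEEE-754 single-precision fields:
    sign (1 bit), biased exponent (8 bits), fraction (23 bits).
    The true exponent is the stored exponent minus the bias of 127."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    return sign, exp, exp - 127, frac

# 1.0 is stored with biased exponent 127, i.e. true exponent 0:
print(decode_float32(1.0))   # (0, 127, 0, 0)
# 6.0 = 1.5 * 2^2: biased exponent 129, fraction bits 0.5 * 2^23 = 0x400000
print(decode_float32(6.0))   # (0, 129, 2, 4194304)
```

The fraction field encodes only the bits after the implicit leading 1 of the normalized significand, which is why 1.0 has a fraction of zero.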

  19. The MIXMAX random number generator

    Science.gov (United States)

    Savvidy, Konstantin G.

    2015-11-01

    In this paper, we study the randomness properties of unimodular matrix random number generators. Under well-known conditions, these discrete-time dynamical systems have the highly desirable K-mixing properties which guarantee high quality random numbers. It is found that some widely used random number generators have poor Kolmogorov entropy and consequently fail in empirical tests of randomness. These tests show that the lowest acceptable value of the Kolmogorov entropy is around 50. Next, we provide a solution to the problem of determining the maximal period of unimodular matrix generators of pseudo-random numbers. We formulate the necessary and sufficient condition to attain the maximum period and present a family of specific generators in the MIXMAX family with superior performance and excellent statistical properties. Finally, we construct three efficient algorithms for operations with the MIXMAX matrix, which is a multi-dimensional generalization of the famous cat-map: the first computes multiplication by the MIXMAX matrix with O(N) operations; the second recursively computes its characteristic polynomial with O(N^2) operations; and the third applies skips of a large number of steps S to the sequence in O(N^2 log(S)) operations.
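The cat-map recursion behind generators of this type can be illustrated with a deliberately tiny 2-D toy. This is only a sketch of the matrix-iteration idea: the real MIXMAX uses large N-dimensional unimodular matrices and a carefully chosen modulus, whereas the matrix, modulus, and seed below are assumptions for illustration.

```python
def catmap_stream(seed=(12345, 6789), mod=2**16 - 1, n=5):
    """Toy 2-D 'cat map' generator: iterate the unimodular matrix
    [[2, 1], [1, 1]] (determinant 1) modulo a fixed integer and emit
    the first coordinate at each step."""
    x, y = seed
    out = []
    for _ in range(n):
        x, y = (2 * x + y) % mod, (x + y) % mod  # one matrix application
        out.append(x)
    return out

print(catmap_stream())
```

Because the matrix has determinant 1, the map is invertible on the state space; the quality of the real generator then rests on the matrix's Kolmogorov entropy and period, the two properties the paper analyzes.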

  20. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  1. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to

  2. Weak coupling large-N transitions at finite baryon density

    NARCIS (Netherlands)

    Hollowood, Timothy J.; Kumar, S. Prem; Myers, Joyce C.

    We study thermodynamics of free SU(N) gauge theory with a large number of colours and flavours on a three-sphere, in the presence of a baryon number chemical potential. Reducing the system to a holomorphic large-N matrix integral, paying specific attention to theories with scalar flavours (squarks),

  3. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    The disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-Chord ... configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking

  4. ‘Surprise’: Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012

    OpenAIRE

    Emma Denehy; Amy Parry; Emily Fearnley

    2012-01-01

    Objective: In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. Methods: A guest list was obtained and a retro...

  5. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device of ˜1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12 560 are experimentally studied. It is shown that for the optimal regime, the laminar region length reaches 5.5 diameters for Reynolds number ˜10 000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level for lower velocities and by the increase of perturbation growth rates for larger velocities. The initial laminar regions of free jets can be used to organise air curtains that protect objects in medicine and technology by creating an air field with the desired properties that does not mix with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  6. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  7. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to provide

  8. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le-1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube due to extended flame residence time by diminished flame surface area, i.e., flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.

  9. Non-Dirac Chern insulators with large band gaps and spin-polarized edge states.

    Science.gov (United States)

    Xue, Y; Zhang, J Y; Zhao, B; Wei, X Y; Yang, Z Q

    2018-05-10

    Based on first-principles calculations and k·p models, we demonstrate that PbC/MnSe heterostructures are a non-Dirac type of Chern insulator with very large band gaps (244 meV) and exotically half-metallic edge states, providing the possibilities of realizing very robust, completely spin polarized, and dissipationless spintronic devices from the heterostructures. The achieved extraordinarily large nontrivial band gap can be ascribed to the contribution of the non-Dirac type electrons (composed of px and py) and the very strong atomic spin-orbit coupling (SOC) interaction of the heavy Pb element in the system. Surprisingly, the band structures are found to be sensitive to the different exchange and correlation functionals adopted in the first-principles calculations. Chern insulators with various mechanisms are acquired from them. These discoveries show that the predicted nontrivial topology in PbC/MnSe heterostructures is robust and can be observed in experiments at high temperatures. The system has great potential to have attractive applications in future spintronics.

  10. Investigation of bacterial transport in the large-block test, a thermally perturbed block of Topopah Spring Tuff

    International Nuclear Information System (INIS)

    Chen, C. I.; Chuu, Y. J.; Lin, W.; Meike, A.; Sawvel, A.

    1998-01-01

    This study investigates the transport of bacteria in a large, thermally perturbed block of Topopah Spring tuff. The study was part of the Large-Block Test (LBT), a set of thermochemical and physical studies conducted on a 10 ft x 10 ft x 14 ft block of volcanic tuff excavated on 5 of 6 sides out of Fran Ridge, Nevada. Two bacterial species, Bacillus subtilis and Arthrobacter oxydans, were isolated from the Yucca Mountain tuff. Natural mutants that can grow under the simultaneous presence of the two antibiotics, streptomycin and rifampicin, were selected from these species by laboratory procedures. The double-drug-resistant mutants, which could thus be distinguished from the indigenous species, were injected into the five heater boreholes of the large block hours before heating was initiated. The temperature, as measured 5 cm above one of the heater boreholes, rose slowly and steadily over a matter of months to a maximum of 142 C. Samples (cotton cloths inserted the length of the hole, glass fiber swabs, and filter papers) were collected from the boreholes that were approximately 5 ft below the injection points. Double-drug-resistant bacteria were found in the collection boreholes nine months after injection. Surprisingly, they also appeared in the heater boreholes, where the temperature had remained high throughout the test. These bacteria appear to be the species that were injected. The number of double-drug-resistant bacteria identified in the collection boreholes increased with time. An apparently homogeneous distribution among the observation boreholes and heater boreholes suggests that the bacteria may have migrated through the block by random motion. These observations indicate the possibility of rapid bacterial transport in a thermally perturbed geologic setting.

  11. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning-the ability to learn from observing the decisions of other people and the outcomes of those decisions-is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the numbers of reviews-a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
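    The "intuitive statistician" contrast above can be made concrete with a standard shrinkage estimate: pull an observed average rating toward a prior mean in proportion to how few reviews it has. The prior values below are made up for illustration; they are not the empirical Amazon prior used in the paper:

```python
def shrunk_score(mean_rating, n_reviews, prior_mean=4.0, prior_strength=10):
    """Posterior-mean style estimate of item quality: few reviews pull the
    average hard toward the prior mean, many reviews barely move it.
    prior_mean and prior_strength are invented illustrative values."""
    return (prior_strength * prior_mean + n_reviews * mean_rating) / (
        prior_strength + n_reviews)

# A 5.0-star item with 3 reviews versus a 4.5-star item with 200 reviews:
few = shrunk_score(5.0, 3)      # pulled down hard toward 4.0
many = shrunk_score(4.5, 200)   # barely moves
print(round(few, 3), round(many, 3))  # → 4.231 4.476
```

    Under this estimate the 200-review item is the statistically better bet despite its lower raw average, which is the kind of inference the participants in the study often failed to make.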

  12. Large Aperture "Photon Bucket" Optical Receiver Performance in High Background Environments

    Science.gov (United States)

    Vilnrotter, Victor A.; Hoppe, D.

    2011-01-01

    The potential development of large aperture groundbased "photon bucket" optical receivers for deep space communications, with acceptable performance even when pointing close to the sun, is receiving considerable attention. Sunlight scattered by the atmosphere becomes significant at micron wavelengths when pointing to a few degrees from the sun, even with the narrowest bandwidth optical filters. In addition, high quality optical apertures in the 10-30 meter range are costly and difficult to build with accurate surfaces to ensure narrow fields-of-view (FOV). One approach currently under consideration is to polish the aluminum reflector panels of large 34-meter microwave antennas to high reflectance, and accept the relatively large FOV generated by state-of-the-art polished aluminum panels with rms surface accuracies on the order of a few microns, corresponding to several-hundred micro-radian FOV, hence generating centimeter-diameter focused spots at the Cassegrain focus of 34-meter antennas. Assuming pulse-position modulation (PPM) and Poisson-distributed photon-counting detection, a "polished panel" photon-bucket receiver with large FOV will collect hundreds of background photons per PPM slot, along with comparable signal photons due to its large aperture. It is demonstrated that communications performance in terms of PPM symbol-error probability in high-background high-signal environments depends more strongly on signal than on background photons, implying that large increases in background energy can be compensated by a disproportionally small increase in signal energy. This surprising result suggests that large optical apertures with relatively poor surface quality may nevertheless provide acceptable performance for deep-space optical communications, potentially enabling the construction of cost-effective hybrid RF/optical receivers in the future.
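    The claim that PPM error probability depends more strongly on signal than on background photons can be checked with a small Monte Carlo sketch under simplified assumptions (max-count detection, independent Poisson slot counts, random tie-breaking; all parameter values are illustrative, not from the cited analysis):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for a Poisson draw with mean lam (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ppm_symbol_error(M, Ks, Kb, trials=20000, seed=1):
    """Monte Carlo symbol-error rate for M-ary PPM with Poisson photon
    counting: the signal slot collects a mean of Ks + Kb photons, the
    other M - 1 slots a mean of Kb; the receiver picks the fullest slot,
    breaking ties at random."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        counts = [poisson(Ks + Kb, rng)]               # slot 0 carries signal
        counts += [poisson(Kb, rng) for _ in range(M - 1)]
        best = max(counts)
        winners = [i for i, c in enumerate(counts) if c == best]
        if rng.choice(winners) != 0:
            errors += 1
    return errors / trials

# Doubling the background hurts far less than a modest signal increase helps:
print(ppm_symbol_error(16, Ks=10, Kb=10))
print(ppm_symbol_error(16, Ks=10, Kb=20))
print(ppm_symbol_error(16, Ks=20, Kb=20))
```

    Running the three cases shows the error rate recovering fully once the signal count is raised, consistent with the abstract's point that extra background energy can be offset by a disproportionally small increase in signal energy.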

  13. Reynolds-number dependence of turbulence enhancement on collision growth

    Directory of Open Access Journals (Sweden)

    R. Onishi

    2016-10-01

    Full Text Available This study investigates the Reynolds-number dependence of turbulence enhancement on the collision growth of cloud droplets. The Onishi turbulent coagulation kernel proposed in Onishi et al. (2015) is updated by using the direct numerical simulation (DNS) results for the Taylor-microscale-based Reynolds number (Reλ) up to 1140. The DNS results for particles with a small Stokes number (St) show a consistent Reynolds-number dependence of the so-called clustering effect with the locality theory proposed by Onishi et al. (2015). It is confirmed that the present Onishi kernel is more robust for a wider St range and has better agreement with the Reynolds-number dependence shown by the DNS results. The present Onishi kernel is then compared with the Ayala–Wang kernel (Ayala et al., 2008a; Wang et al., 2008). At low and moderate Reynolds numbers, both kernels show similar values except for r2 ∼ r1, for which the Ayala–Wang kernel shows much larger values due to its large turbulence enhancement on collision efficiency. A large difference is observed for the Reynolds-number dependences between the two kernels. The Ayala–Wang kernel increases for the autoconversion region (r1, r2 < 40 µm) and for the accretion region (r1 < 40 and r2 > 40 µm; r1 > 40 and r2 < 40 µm) as Reλ increases. In contrast, the Onishi kernel decreases for the autoconversion region and increases for the rain–rain self-collection region (r1, r2 > 40 µm). Stochastic collision–coalescence equation (SCE) simulations are also conducted to investigate the turbulence enhancement on particle size evolutions. The SCE with the Ayala–Wang kernel (SCE-Ayala) and that with the present Onishi kernel (SCE-Onishi) are compared with results from the Lagrangian Cloud Simulator (LCS; Onishi et al., 2015), which tracks individual particle motions and size evolutions in homogeneous isotropic turbulence. The SCE-Ayala and SCE-Onishi kernels show consistent

  14. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  15. Podocyte Number in Children and Adults: Associations with Glomerular Size and Numbers of Other Glomerular Resident Cells

    Science.gov (United States)

    Puelles, Victor G.; Douglas-Denton, Rebecca N.; Cullen-McEwen, Luise A.; Li, Jinhua; Hughson, Michael D.; Hoy, Wendy E.; Kerr, Peter G.

    2015-01-01

    Increases in glomerular size occur with normal body growth and in many pathologic conditions. In this study, we determined associations between glomerular size and numbers of glomerular resident cells, with a particular focus on podocytes. Kidneys from 16 male Caucasian-Americans without overt renal disease, including 4 children (≤3 years old) to define baseline values of early life and 12 adults (≥18 years old), were collected at autopsy in Jackson, Mississippi. We used a combination of immunohistochemistry, confocal microscopy, and design-based stereology to estimate individual glomerular volume (IGV) and numbers of podocytes, nonepithelial cells (NECs; tuft cells other than podocytes), and parietal epithelial cells (PECs). Podocyte density was calculated. Data are reported as medians and interquartile ranges (IQRs). Glomeruli from children were small and contained 452 podocytes (IQR=335–502), 389 NECs (IQR=265–498), and 146 PECs (IQR=111–206). Adult glomeruli contained significantly more cells than glomeruli from children, including 558 podocytes (IQR=431–746; P<0.01), 1383 NECs (IQR=998–2042; P<0.001), and 367 PECs (IQR=309–673; P<0.001). However, large adult glomeruli showed markedly lower podocyte density (183 podocytes per 106 µm3) than small glomeruli from adults and children (932 podocytes per 106 µm3; P<0.001). In conclusion, large adult glomeruli contained more podocytes than small glomeruli from children and adults, raising questions about the origin of these podocytes. The increased number of podocytes in large glomeruli does not match the increase in glomerular size observed in adults, resulting in relative podocyte depletion. This may render hypertrophic glomeruli susceptible to pathology. PMID:25568174

  16. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichments in gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). By extending this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also directly point to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.

  17. Population-genetic nature of copy number variations in the human genome.

    Science.gov (United States)

    Kato, Mamoru; Kawaguchi, Takahisa; Ishikawa, Shumpei; Umeda, Takayoshi; Nakamichi, Reiichiro; Shapero, Michael H; Jones, Keith W; Nakamura, Yusuke; Aburatani, Hiroyuki; Tsunoda, Tatsuhiko

    2010-03-01

    Copy number variations (CNVs) are universal genetic variations, and their association with disease has been increasingly recognized. We designed high-density microarrays for CNVs, and detected 3000-4000 CNVs (4-6% of the genomic sequence) per population, including CNVs previously missed because of their smaller sizes or their location in segmental duplications. The patterns of CNVs across individuals were surprisingly simple at the kilo-base scale, suggesting the applicability of a simple genetic analysis for these genetic loci. We utilized probabilistic theory to determine integer copy numbers of CNVs and employed a recently developed phasing tool to estimate the population frequencies of integer copy number alleles and CNV-SNP haplotypes. The results showed a tendency toward a lower frequency of CNV alleles and that most of our CNVs were explained only by zero-, one- and two-copy alleles. Using the estimated population frequencies, we found several CNV regions with exceptionally high population differentiation. Investigation of CNV-SNP linkage disequilibrium (LD) for 500-900 bi- and multi-allelic CNVs per population revealed that previous conflicting reports on bi-allelic LD were unexpectedly consistent and explained by an LD increase correlated with deletion-allele frequencies. Typically, the bi-allelic LD was lower than SNP-SNP LD, whereas the multi-allelic LD was somewhat stronger than the bi-allelic LD. After further investigation of tag SNPs for CNVs, we conclude that the customary tagging strategy for disease association studies can be applicable for common deletion CNVs, but direct interrogation is needed for other types of CNVs.
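    The bi-allelic LD comparisons in the abstract rest on the standard r² statistic. A minimal sketch, with invented frequencies (real analyses estimate haplotype frequencies from genotype data rather than taking them as given):

```python
def ld_r2(p_ab, p_a, p_b):
    """r^2 linkage disequilibrium between two bi-allelic loci (e.g. a
    deletion-type CNV allele and a SNP allele), given the haplotype
    frequency p_ab and the marginal allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b                  # classical LD coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Invented frequencies: a 20% deletion allele and a 30% SNP allele that
# share a haplotype 15% of the time:
print(round(ld_r2(0.15, 0.20, 0.30), 3))   # → 0.241
```

    Tagging a deletion with a SNP is practical only when some SNP reaches high r² with it, which is why the abstract recommends direct interrogation for the CNV types where such proxies are weak.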

  18. Plume structure in high-Rayleigh-number convection

    Science.gov (United States)

    Puthenveettil, Baburaj A.; Arakeri, Jaywant H.

    2005-10-01

    Near-wall structures in turbulent natural convection at Rayleigh numbers of 10^{10} to 10^{11} and a Schmidt number of 602 are visualized by a new method of driving the convection across a fine membrane using concentration differences of sodium chloride. The visualizations show the near-wall flow to consist of sheet plumes. A wide variety of large-scale flow cells, scaling with the cross-section dimension, are observed. Multiple large-scale flow cells are seen at aspect ratio (AR)= 0.65, while only a single circulation cell is detected at AR= 0.435. The cells (or the mean wind) are driven by plumes coming together to form columns of rising lighter fluid. The wind in turn aligns the sheet plumes along the direction of shear. The mean wind direction is seen to change with time. The near-wall dynamics show plumes initiated at points, which elongate to form sheets and then merge. Increase in Rayleigh number results in a larger number of closely and regularly spaced plumes. The plume spacings show a common log normal probability distribution function, independent of the Rayleigh number and the aspect ratio. We propose that the near-wall structure is made of laminar natural-convection boundary layers, which become unstable to give rise to sheet plumes, and show that the predictions of a model constructed on this hypothesis match the experiments. Based on these findings, we conclude that in the presence of a mean wind, the local near-wall boundary layers associated with each sheet plume in high-Rayleigh-number turbulent natural convection are likely to be of laminar mixed convection type.

  19. Using Copy Number Alterations to Identify New Therapeutic Targets for Bladder Carcinoma

    Directory of Open Access Journals (Sweden)

    Donatella Conconi

    2016-02-01

    Full Text Available Bladder cancer represents the ninth most widespread malignancy throughout the world. It is characterized by the presence of two different clinical and prognostic subtypes: non-muscle-invasive bladder cancers (NMIBCs) and muscle-invasive bladder cancers (MIBCs). MIBCs have a poor outcome with a common progression to metastasis. Despite improvements in knowledge, treatment has not advanced significantly in recent years, with the absence of new therapeutic targets. Because of the limitations of current therapeutic options, the greater challenge will be to identify biomarkers for clinical application. For this reason, we compared our array comparative genomic hybridization (array-CGH) results with those reported in literature for invasive bladder tumors and, in particular, we focused on the evaluation of copy number alterations (CNAs) present in biopsies and retained in the corresponding cancer stem cell (CSC) subpopulations that should be the main target of therapy. According to our data, CCNE1, MYC, MDM2 and PPARG genes could be interesting therapeutic targets for bladder CSC subpopulations. Surprisingly, HER2 copy number gains are not retained in bladder CSCs, making the gene-targeted therapy less interesting than the others. These results provide precious advice for further study on bladder therapy; however, the clinical importance of these results should be explored.

  20. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  1. The transcriptome of Bathymodiolus azoricus gill reveals expression of genes from endosymbionts and free-living deep-sea bacteria.

    Science.gov (United States)

    Egas, Conceição; Pinheiro, Miguel; Gomes, Paula; Barroso, Cristina; Bettencourt, Raul

    2012-08-01

    Deep-sea environments are largely unexplored habitats where a surprising number of species may be found in large communities, thriving regardless of the darkness, extreme cold, and high pressure. Their unique geochemical features result in reducing environments rich in methane and sulfides, sustaining complex chemosynthetic ecosystems that represent one of the most surprising findings in oceans in the last 40 years. The deep-sea Lucky Strike hydrothermal vent field, located in the Mid Atlantic Ridge, is home to large vent mussel communities where Bathymodiolus azoricus represents the dominant faunal biomass, owing its survival to symbiotic associations with methylotrophic or methanotrophic and thiotrophic bacteria. The recent transcriptome sequencing and analysis of gill tissues from B. azoricus revealed a number of genes of bacterial origin, hereby analyzed to provide a functional insight into the gill microbial community. The transcripts supported a metabolically active microbiome and a variety of mechanisms and pathways, evidencing also the sulfur and methane metabolisms. Taxonomic affiliation of transcripts and 16S rRNA community profiling revealed a microbial community dominated by thiotrophic and methanotrophic endosymbionts of B. azoricus and the presence of a Sulfurovum-like epsilonbacterium.

  2. The Transcriptome of Bathymodiolus azoricus Gill Reveals Expression of Genes from Endosymbionts and Free-Living Deep-Sea Bacteria

    Directory of Open Access Journals (Sweden)

    Raul Bettencourt

    2012-08-01

    Full Text Available Deep-sea environments are largely unexplored habitats where a surprising number of species may be found in large communities, thriving regardless of the darkness, extreme cold, and high pressure. Their unique geochemical features result in reducing environments rich in methane and sulfides, sustaining complex chemosynthetic ecosystems that represent one of the most surprising findings in oceans in the last 40 years. The deep-sea Lucky Strike hydrothermal vent field, located in the Mid Atlantic Ridge, is home to large vent mussel communities where Bathymodiolus azoricus represents the dominant faunal biomass, owing its survival to symbiotic associations with methylotrophic or methanotrophic and thiotrophic bacteria. The recent transcriptome sequencing and analysis of gill tissues from B. azoricus revealed a number of genes of bacterial origin, hereby analyzed to provide a functional insight into the gill microbial community. The transcripts supported a metabolically active microbiome and a variety of mechanisms and pathways, evidencing also the sulfur and methane metabolisms. Taxonomic affiliation of transcripts and 16S rRNA community profiling revealed a microbial community dominated by thiotrophic and methanotrophic endosymbionts of B. azoricus and the presence of a Sulfurovum-like epsilonbacterium.

  3. Beyond left and right: Automaticity and flexibility of number-space associations.

    Science.gov (United States)

    Antoine, Sophie; Gevers, Wim

    2016-02-01

    Close links exist between the processing of numbers and the processing of space: relatively small numbers are preferentially associated with a left-sided response while relatively large numbers are associated with a right-sided response (the SNARC effect). Previous work demonstrated that the SNARC effect is triggered in an automatic manner and is highly flexible. Besides the left-right dimension, numbers associate with other spatial response mappings such as close/far responses, where small numbers are associated with a close response and large numbers with a far response. In two experiments we investigate the nature of this association. Associations between magnitude and close/far responses were observed using a magnitude-irrelevant task (Experiment 1: automaticity) and using a variable referent task (Experiment 2: flexibility). While drawing a strong parallel between both response mappings, the present results are also informative with regard to the question about what type of processing mechanism underlies both the SNARC effect and the association between numerical magnitude and close/far response locations.

  4. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

    shows like "Agents of S.H.I.E.L.D.". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in

  5. [Fall from height--surprising autopsy diagnosis in primarily unclear initial situations].

    Science.gov (United States)

    Schyma, Christian; Doberentz, Elke; Madea, Burkhard

    2012-01-01

    External post-mortem examination and first police assessments are often not consistent with subsequent autopsy results. This is all the more surprising when the injuries found at autopsy are serious. Such discrepancies result especially from an absence of gross external injuries, as demonstrated by four examples. A 42-year-old, externally uninjured male was found at night in a helpless condition in the street and died despite resuscitation. Autopsy showed severe polytrauma with traumatic brain injury and lesions of the thoracic and abdominal organs. A jump from the third floor was identified as the cause. At dawn, a twenty-year-old male was found dead on the grounds of the adjacent house. Because of the blood-covered head, the police assumed a traumatic head injury from a blow. The external examination revealed only abrasions on the forehead and, to a minor extent, on the back. At autopsy a midfacial fracture, a trauma of the thorax and abdomen, and fractures of the spine and pelvis were detected. Subsequent investigations showed that the man, intoxicated by alcohol, had fallen from the flat roof of a multistoried house. A 77-year-old man was found unconscious on his terrace during the daytime; a cerebral seizure was assumed. He was transferred to emergency care, where he died. The corpse was externally inconspicuous. Autopsy revealed serious traumatic injuries of the brain, thorax, abdomen and pelvis, which could be explained by a fall from the balcony. A 47-year-old homeless person without any external injuries was found dead in a barn. An alcohol intoxication was assumed. At autopsy severe injuries of the brain and cervical spine were found, which were the result of a fall from a height of 5 m. On the basis of an external post-mortem examination alone, gross blunt force trauma cannot be reliably excluded.

  6. Energy transfers in dynamos with small magnetic Prandtl numbers

    KAUST Repository

    Kumar, Rohit

    2015-06-25

    We perform a numerical simulation of a dynamo with magnetic Prandtl number Pm = 0.2 on a 1024³ grid, and compute the energy fluxes and the shell-to-shell energy transfers. These computations indicate that the magnetic energy growth takes place mainly due to energy transfers from the large-scale velocity field to the large-scale magnetic field and that the magnetic energy flux is forward. The steady-state magnetic energy is much smaller than the kinetic energy, rather than in equipartition with it; this is because the magnetic Reynolds number is near the dynamo transition regime. We also contrast our results with those for a dynamo with Pm = 20 and for a decaying dynamo. © 2015 Taylor & Francis.

  7. The future of large old trees in urban landscapes.

    Science.gov (United States)

    Le Roux, Darren S; Ikin, Karen; Lindenmayer, David B; Manning, Adrian D; Gibbons, Philip

    2014-01-01

    Large old trees are disproportionate providers of structural elements (e.g. hollows, coarse woody debris), which are crucial habitat resources for many species. The decline of large old trees in modified landscapes is of global conservation concern. Once large old trees are removed, they are difficult to replace in the short term due to the typically prolonged time periods needed for trees to mature (i.e. centuries). Few studies have investigated the decline of large old trees in urban landscapes. Using a simulation model, we predicted the future availability of native hollow-bearing trees (a surrogate for large old trees) in an expanding city in southeastern Australia. In urban greenspace, we predicted that the number of hollow-bearing trees is likely to decline by 87% over 300 years under existing management practices. Under a worst case scenario, hollow-bearing trees may be completely lost within 115 years. Conversely, we predicted that the number of hollow-bearing trees will likely remain stable in semi-natural nature reserves. Sensitivity analysis revealed that the number of hollow-bearing trees perpetuated in urban greenspace over the long term is most sensitive to: (1) the maximum standing life of trees; (2) the number of regenerating seedlings per hectare; and (3) the rate of hollow formation. We tested the efficacy of alternative urban management strategies and found that the only way to arrest the decline of large old trees requires a collective management strategy that ensures: (1) trees remain standing for at least 40% longer than currently tolerated lifespans; (2) the number of seedlings established is increased by at least 60%; and (3) the formation of habitat structures provided by large old trees is accelerated by at least 30% (e.g. artificial structures) to compensate for short term deficits in habitat resources. Immediate implementation of these recommendations is needed to avert long term risk to urban biodiversity.

  8. The morphodynamics and sedimentology of large river confluences

    Science.gov (United States)

    Nicholas, Andrew; Sambrook Smith, Greg; Best, James; Bull, Jon; Dixon, Simon; Goodbred, Steven; Sarker, Mamin; Vardy, Mark

    2017-04-01

    Confluences are key locations within large river networks, yet surprisingly little is known about how they migrate and evolve through time. Moreover, because confluence sites are associated with scour pools that are typically several times the mean channel depth, the deposits associated with such scours should have a high potential for preservation within the rock record. However, paradoxically, such scours are rarely observed, and the sedimentological characteristics of such deposits are poorly understood. This study reports results from a physically-based morphodynamic model, which is applied to simulate the evolution and resulting alluvial architecture associated with large river junctions. Boundary conditions within the model simulation are defined to approximate the junction of the Ganges and Jamuna rivers, in Bangladesh. Model results are supplemented by geophysical datasets collected during boat-based surveys at this junction. Simulated deposit characteristics and geophysical datasets are compared with three existing and contrasting conceptual models that have been proposed to represent the sedimentary architecture of confluence scours. Results illustrate that existing conceptual models may be overly simplistic, although elements of each of the three conceptual models are evident in the deposits generated by the numerical simulation. The latter are characterised by several distinct styles of sedimentary fill, which can be linked to particular morphodynamic behaviours. However, the preserved characteristics of simulated confluence deposits vary substantially according to the degree of reworking by channel migration. This may go some way towards explaining the confluence scour paradox; while abundant large scours might be expected in the rock record, they are rarely reported.

  9. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication by a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent.

  10. Act on Numbers: Numerical Magnitude Influences Selection and Kinematics of Finger Movement

    Directory of Open Access Journals (Sweden)

    Rosa Rugani

    2017-08-01

    Full Text Available In the past decade hand kinematics has been reliably adopted for investigating cognitive processes and disentangling debated topics. One of the most controversial issues in the numerical cognition literature regards the origin (cultural vs. genetically driven) of the mental number line (MNL), oriented from left (small numbers) to right (large numbers). To date, the majority of studies have investigated this effect by means of response times, whereas studies considering more culturally unbiased measures such as kinematic parameters are rare. Here, we present a new paradigm that combines a "free response" task with the kinematic analysis of movement. Participants were seated in front of two little soccer goals placed on a table, one on the left and one on the right side. They were presented with left- or right-directed arrows and were instructed to kick a small ball with their right index finger toward the goal indicated by the arrow. In a few test trials participants were also presented with a small (2) or a large (8) number, and they were allowed to choose the kicking direction. Participants performed more left responses with the small number and more right responses with the large number. The whole kicking movement was segmented into two temporal phases in order to permit a fine-grained analysis of hand kinematics. The Kick Preparation and Kick Finalization phases were selected on the basis of peak trajectory deviation from the virtual midline between the two goals. Results show an effect of both small and large numbers on action execution timing. Participants were faster to finalize the action when responding to small numbers toward the left and to large numbers toward the right. Here, we provide the first experimental demonstration of how numerical processing affects action execution in a new and not-overlearned context. The employment of this innovative and unbiased paradigm will permit disentangling the roles of nature and culture.

  11. The genome of Pelobacter carbinolicus reveals surprising metabolic capabilities and physiological features

    Energy Technology Data Exchange (ETDEWEB)

    Aklujkar, Muktak [University of Massachusetts, Amherst; Haveman, Shelley [University of Massachusetts, Amherst; DiDonatoJr, Raymond [University of Massachusetts, Amherst; Chertkov, Olga [Los Alamos National Laboratory (LANL); Han, Cliff [Los Alamos National Laboratory (LANL); Land, Miriam L [ORNL; Brown, Peter [University of Massachusetts, Amherst; Lovley, Derek [University of Massachusetts, Amherst

    2012-01-01

    Background: The bacterium Pelobacter carbinolicus is able to grow by fermentation, syntrophic hydrogen/formate transfer, or electron transfer to sulfur from short-chain alcohols, hydrogen or formate; it does not oxidize acetate and is not known to ferment any sugars or grow autotrophically. The genome of P. carbinolicus was sequenced in order to understand its metabolic capabilities and physiological features in comparison with its relatives, acetate-oxidizing Geobacter species. Results: Pathways were predicted for catabolism of known substrates: 2,3-butanediol, acetoin, glycerol, 1,2-ethanediol, ethanolamine, choline and ethanol. Multiple isozymes of 2,3-butanediol dehydrogenase, ATP synthase and [FeFe]-hydrogenase were differentiated and assigned roles according to their structural properties and genomic contexts. The absence of asparagine synthetase and the presence of a mutant tRNA for asparagine encoded among RNA-active enzymes suggest that P. carbinolicus may make asparaginyl-tRNA in a novel way. Catabolic glutamate dehydrogenases were discovered, implying that the tricarboxylic acid (TCA) cycle can function catabolically. A phosphotransferase system for uptake of sugars was discovered, along with enzymes that function in 2,3-butanediol production. Pyruvate: ferredoxin/flavodoxin oxidoreductase was identified as a potential bottleneck in both the supply of oxaloacetate for oxidation of acetate by the TCA cycle and the connection of glycolysis to production of ethanol. The P. carbinolicus genome was found to encode autotransporters and various appendages, including three proteins with similarity to the geopilin of electroconductive nanowires. Conclusions: Several surprising metabolic capabilities and physiological features were predicted from the genome of P. carbinolicus, suggesting that it is more versatile than anticipated.

  12. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Science.gov (United States)

    Minaudo, Camille; Curie, Florence; Jullian, Yann; Gassama, Nathalie; Moatar, Florentina

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  13. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Directory of Open Access Journals (Sweden)

    C. Minaudo

    2018-04-01

    Full Text Available To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  14. Polynomial selection in number field sieve for integer factorization

    Directory of Open Access Journals (Sweden)

    Gireesh Pandey

    2016-09-01

    Full Text Available The general number field sieve (GNFS) is the fastest algorithm for factoring large composite integers that are the product of two prime numbers. Polynomial selection is an important step of GNFS. The asymptotic runtime depends on the choice of good polynomial pairs. In this paper, we present a polynomial selection algorithm modelled on size and root properties. The correlations between polynomial coefficients and the number of relations have been explored with experimental findings.
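As an illustrative aside: the classical starting point for GNFS polynomial selection (which size- and root-property scoring then improves upon) is the base-m method. One picks m near N^(1/(d+1)) and reads the coefficients of f off the base-m expansion of N, so that f(m) ≡ 0 (mod N) and f can be paired with the linear polynomial g(x) = x − m. A minimal sketch in Python; the scoring studied in the paper is not shown, and the float-based starting guess for m assumes N fits comfortably in a double:

```python
def base_m_polynomial(N, d):
    """Base-m polynomial selection sketch: return (coeffs, m) with
    coeffs[i] the coefficient of x^i and f(m) == N exactly."""
    # Choose the smallest m with m**(d+1) > N, so the base-m
    # expansion of N has at most d+1 digits.
    m = round(N ** (1.0 / (d + 1)))
    while m ** (d + 1) <= N:
        m += 1
    coeffs, r = [], N
    for _ in range(d + 1):
        coeffs.append(r % m)   # digit = coefficient of x^i
        r //= m
    return coeffs, m
```

For example, with d = 5 this yields a sextic f and the pair (f, x − m) sharing the common root m modulo N, which is the input that sieving then works with.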

  15. Effective field theories in the large-N limit

    International Nuclear Information System (INIS)

    Weinberg, S.

    1997-01-01

    Various effective field theories in four dimensions are shown to have exact nontrivial solutions in the limit as the number N of fields of some type becomes large. These include extended versions of the U(N) Gross-Neveu model, the nonlinear O(N) σ model, and the CP^(N-1) model. Although these models are not renormalizable in the usual sense, the infinite number of coupling types allows a complete cancellation of infinities. These models provide qualitative predictions of the form of scattering amplitudes for arbitrary momenta, but because of the infinite number of free parameters, it is possible to derive quantitative predictions only in the limit of small momenta. For small momenta the large-N limit provides only a modest simplification, removing at most a finite number of diagrams to each order in momenta, except near phase transitions, where it reduces the infinite number of diagrams that contribute for low momenta to a finite number. copyright 1997 The American Physical Society

  16. Distribution of squares modulo a composite number

    OpenAIRE

    Aryan, Farzad

    2015-01-01

    In this paper we study the distribution of squares modulo a square-free number $q$. We also look at inverse questions for the large sieve in the distribution aspect and we make improvements on existing results on the distribution of $s$-tuples of reduced residues.
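As a concrete illustration of the objects studied (not the paper's method): the squares modulo q can be enumerated directly, and for odd square-free q = p₁⋯p_k the Chinese remainder theorem gives their count (including 0) as the product of the counts (pᵢ+1)/2 modulo each prime factor:

```python
def squares_mod(q):
    """All distinct values of x^2 mod q, in sorted order."""
    return sorted({pow(x, 2, q) for x in range(q)})
```

For q = 15 = 3 · 5 this gives (3+1)/2 · (5+1)/2 = 6 squares: {0, 1, 4, 6, 9, 10}.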

  17. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 and above. Supported by the Deutsche Forschungsgemeinschaft.

  18. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.
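The low source Froude numbers mentioned above are commonly expressed through the dimensionless heat-release rate Q* (after Zukoski), which falls as pool diameter grows for a fixed heat release. A rough sketch; the default ambient values (air density 1.2 kg/m³, cp = 1.0 kJ/kg·K, T∞ = 293 K) are assumptions for illustration:

```python
from math import sqrt

def q_star(Q_kw, D, rho=1.2, cp=1.0, T_inf=293.0, g=9.81):
    """Dimensionless heat-release rate
    Q* = Q / (rho * cp * T_inf * sqrt(g * D) * D**2),
    with Q in kW and D in m. Large pool fires sit at low Q*
    (low source Froude number)."""
    return Q_kw / (rho * cp * T_inf * sqrt(g * D) * D ** 2)
```

For example, a 1 MW fire over a 1 m pool gives Q* near 1, while the same heat release over a 5 m pool gives a much smaller Q*, i.e. the buoyancy-dominated regime the review discusses.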

  19. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)); and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain the IBM results converged to its Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states and at the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  20. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication by a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed.
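The core arithmetic such a detector performs can be sketched as subtracting the inductively predicted voltages from the measured coil voltages and thresholding the residuals. This is a minimal sketch of the idea, not the paper's detector; the inductance matrix, current derivatives, and threshold below are hypothetical illustration values:

```python
def normal_zone_voltages(M, dI_dt, v_measured):
    """Residual (resistive) voltage per coil: v_i - sum_j M[i][j] * dI_j/dt.
    M is the self/mutual inductance matrix (H), dI_dt the current
    derivatives (A/s), v_measured the coil voltages (V). A nonzero
    residual is attributed to a normal zone in that coil."""
    return [v - sum(m_ij * di for m_ij, di in zip(row, dI_dt))
            for row, v in zip(M, v_measured)]

def locate_normal_zones(residuals, threshold):
    # Simple decision-making: flag coils whose residual exceeds threshold.
    return [i for i, r in enumerate(residuals) if abs(r) > threshold]
```

Only multiplications by constants, additions, and a comparison appear, matching the small set of operations the abstract describes.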

  1. YBYRÁ facilitates comparison of large phylogenetic trees.

    Science.gov (United States)

    Machado, Denis Jacob

    2015-07-01

    The number and size of tree topologies that are being compared by phylogenetic systematists is increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with a large number of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python, hence they do not require installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html .
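Topological distance from shared splits (item 1 above) can be illustrated with a Robinson-Foulds-style count of the splits two trees do not share. This sketch represents trees as nested tuples of leaf names and is not YBYRÁ's own code:

```python
def splits(tree):
    """Collect the non-trivial splits (clades) of a rooted tree given
    as nested tuples of leaf names, e.g. ((('A','B'),'C'),('D','E'))."""
    all_leaves, out = set(), set()

    def walk(node):
        if isinstance(node, tuple):
            clade = frozenset()
            for child in node:
                clade |= walk(child)
            out.add(clade)
            return clade
        all_leaves.add(node)
        return frozenset([node])

    walk(tree)
    out.discard(frozenset(all_leaves))  # drop the trivial root clade
    return out

def shared_split_distance(t1, t2):
    """Number of splits present in exactly one of the two trees."""
    return len(splits(t1) ^ splits(t2))
```

Two identical topologies score 0; moving one leaf between clades changes only the splits that contain it, so the count grows with topological disagreement.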

  2. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentration of analytes is known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to "characterize" the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
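The RHW of a normal-theory 95% CI can be sketched as follows. This is an illustrative computation, not the report's procedure: a t-quantile would be more exact for the very small n typical of core sampling, but the normal quantile keeps the sketch stdlib-only:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def relative_half_width(samples, confidence=0.95):
    """Relative half-width of an approximate CI on the mean:
    z * s / (sqrt(n) * mean), i.e. half the CI width divided by
    the estimated mean (unit-less)."""
    n = len(samples)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    return z * stdev(samples) / (sqrt(n) * mean(samples))
```

Because the half-width shrinks like 1/sqrt(n), quadrupling the number of cores only halves the RHW, which is why reaching 10% of the mean can demand a very large sample count.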

  3. Human behaviour can trigger large carnivore attacks in developed countries.

    Science.gov (United States)

    Penteriani, Vincenzo; Delgado, María del Mar; Pinchera, Francesco; Naves, Javier; Fernández-Gil, Alberto; Kojola, Ilpo; Härkönen, Sauli; Norberg, Harri; Frank, Jens; Fedriani, José María; Sahlén, Veronica; Støen, Ole-Gunnar; Swenson, Jon E; Wabakken, Petter; Pellegrini, Mario; Herrero, Stephen; López-Bao, José Vicente

    2016-02-03

    The media and scientific literature are increasingly reporting an escalation of large carnivore attacks on humans in North America and Europe. Although rare compared to human fatalities by other wildlife, the media often overplay large carnivore attacks on humans, causing increased fear and negative attitudes towards coexisting with and conserving these species. Although large carnivore populations are generally increasing in developed countries, increased numbers are not solely responsible for the observed rise in the number of attacks by large carnivores. Here we show that an increasing number of people are involved in outdoor activities and, when doing so, some people engage in risk-enhancing behaviour that can increase the probability of a risky encounter and a potential attack. About half of the well-documented reported attacks have involved risk-enhancing human behaviours, the most common of which is leaving children unattended. Our study provides unique insight into the causes, and as a result the prevention, of large carnivore attacks on people. Prevention and information that can encourage appropriate human behaviour when sharing the landscape with large carnivores are of paramount importance to reduce both potentially fatal human-carnivore encounters and their consequences to large carnivores.

  4. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361000 and the density ratio across the wall boundary layer was 3.3 due to a substantial temperature difference of 1600 K between jet and wall. Results are presented which indicate very high heat flux levels and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore calculations were performed to study the effect of applying temperature dependent thermophysical properties versus constant properties and the effect of calculating the gas

  5. Introduction: Scaling and structure in high Reynolds number wall-bounded flows

    International Nuclear Information System (INIS)

    McKeon, B.J.; Sreenivasan, K.R.

    2007-05-01

    The papers discussed in this report deal with the following aspects: fundamental scaling relations for canonical flows and the asymptotic approach to infinite Reynolds numbers; large and very large scales in near-wall turbulence; the influence of roughness and finite Reynolds number effects; comparison between internal and external flows and the universality of the near-wall region; qualitative and quantitative models of the turbulent boundary layer; and the neutrally stable atmospheric surface layer as a model for a canonical zero-pressure-gradient boundary layer (author)

  6. The effects of large beach debris on nesting sea turtles

    Science.gov (United States)

    Fujisaki, Ikuko; Lamont, Margaret M.

    2016-01-01

    A field experiment was conducted to understand the effects of large beach debris on sea turtle nesting behavior as well as the effectiveness of large debris removal for habitat restoration. Large natural and anthropogenic debris were removed from one of three sections of a sea turtle nesting beach, and distributions of nests and false crawls (non-nesting crawls) in pre- (2011-2012) and post- (2013-2014) removal years in the three sections were compared. The number of nests increased by 200% and the number of false crawls increased by 55% in the experimental section, whereas a corresponding increase in the number of nests and false crawls was not observed in the other two sections, where debris removal was not conducted. The proportion of nest and false crawl abundance in all three beach sections was significantly different between pre- and post-removal years. The nesting success, the percent of successful nests in total nesting attempts (number of nests + false crawls), also increased from 24% to 38%; however, the magnitude of the increase was comparatively small because both the number of nests and the number of false crawls increased, and thus nesting success in the experimental section did not differ significantly between pre- and post-removal years. The substantial increase in sea turtle nesting activities after the removal of large debris indicates that large debris may have an adverse impact on sea turtle nesting behavior. Removal of large debris could be an effective restoration strategy to improve sea turtle nesting.

  7. Hot-ion Bernstein wave with large k∥

    International Nuclear Information System (INIS)

    Ignat, D.W.; Ono, M.

    1995-01-01

    The complex roots of the hot plasma dispersion relation in the ion cyclotron range of frequencies have been surveyed. Progressing from low to high values of the perpendicular wave number k⊥, we find first the cold plasma fast wave and then the well-known Bernstein wave, which is characterized by large dispersion, or large changes in k⊥ for small changes in frequency or magnetic field. At still higher k⊥ there can be two hot plasma waves with relatively little dispersion. The latter waves exist only for relatively large k∥, the wave number parallel to the magnetic field, and are strongly damped unless the electron temperature is low compared to the ion temperature. Up to three mode conversions appear to be possible, but two mode conversions are seen consistently

  8. The number of expats is rather stable

    DEFF Research Database (Denmark)

    Andersen, Torben

    2008-01-01

    Aggregate data from the Danish economists’ and engineers’ trade unions show that during the last decade there has been stagnation in the number of expatriates. Taking into consideration that the three trade unions cover the very large majority of Danish knowledge workers occupying foreign jobs...

  9. Radical Software. Number Two. The Electromagnetic Spectrum.

    Science.gov (United States)

    Korot, Beryl, Ed.; Gershuny, Phyllis, Ed.

    1970-01-01

    In an effort to foster the innovative uses of television technology, this tabloid format periodical details social, educational, and artistic experiments with television and lists a large number of experimental videotapes available from various television-centered groups and individuals. The principal areas explored in this issue include cable…

  10. On the binary expansions of algebraic numbers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Pomerance, Carl

    2003-07-01

    Employing concepts from additive number theory, together with results on binary evaluations and partial series, we establish bounds on the density of 1's in the binary expansions of real algebraic numbers. A central result is that if a real y has algebraic degree D > 1, then the number #(|y|, N) of 1-bits in the expansion of |y| through bit position N satisfies #(|y|, N) > C·N^(1/D) for a positive number C (depending on y) and sufficiently large N. This in itself establishes the transcendency of a class of reals Σ_{n≥0} 1/2^f(n) where the integer-valued function f grows sufficiently fast; say, faster than any fixed power of n. By these methods we re-establish the transcendency of the Kempner–Mahler number Σ_{n≥0} 1/2^(2^n), yet we can also handle numbers with a substantially denser occurrence of 1's. Though the number z = Σ_{n≥0} 1/2^(n²) has too high a 1's density for application of our central result, we are able to invoke some rather intricate number-theoretical analysis and extended computations to reveal aspects of the binary structure of z².
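
    The central bound can be probed numerically. The sketch below (an illustration, not code from the paper) counts 1-bits in the binary expansion of √2, which has algebraic degree D = 2, and compares the count through bit position N with the lower-bound shape N^(1/2); exact bits are obtained via integer square roots rather than floating point.

```python
from math import isqrt

def one_bits_of_sqrt2(n_bits):
    # floor(sqrt(2) * 2**n_bits) carries the first n_bits fractional
    # bits of sqrt(2) (plus its single integer bit); count the 1s.
    return bin(isqrt(2 << (2 * n_bits))).count("1")

# sqrt(2) has degree D = 2, so the paper's bound reads
# #(y, N) > C * N**(1/2) for some C > 0 and all sufficiently large N.
for N in (100, 1000, 10000):
    print(N, one_bits_of_sqrt2(N), round(N ** 0.5))
```

    The observed 1-bit count grows roughly like N/2, comfortably above the N^(1/2) floor that the theorem guarantees for any degree-2 algebraic number.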

  11. Interaction of Number Magnitude and Auditory Localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan

    2016-01-01

    The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.

  12. Large-scale behaviour of local and entanglement entropy of the free Fermi gas at any temperature

    Science.gov (United States)

    Leschke, Hajo; Sobolev, Alexander V.; Spitzer, Wolfgang

    2016-07-01

    The leading asymptotic large-scale behaviour of the spatially bipartite entanglement entropy (EE) of the free Fermi gas infinitely extended in multidimensional Euclidean space at zero absolute temperature, T = 0, is by now well understood. Here, we present and discuss the first rigorous results for the corresponding EE of thermal equilibrium states at T > 0. The leading large-scale term of this thermal EE turns out to be twice the first-order finite-size correction to the infinite-volume thermal entropy (density). Not surprisingly, this correction is just the thermal entropy on the interface of the bipartition. However, it is given by a rather complicated integral derived from a semiclassical trace formula for a certain operator on the underlying one-particle Hilbert space. But in the zero-temperature limit T ↓ 0, the leading large-scale term of the thermal EE considerably simplifies and displays a ln(1/T)-singularity which one may identify with the known logarithmic enhancement at T = 0 of the so-called area-law scaling.

  13. The large-s field-reversed configuration experiment

    International Nuclear Information System (INIS)

    Hoffman, A.L.; Carey, L.N.; Crawford, E.A.; Harding, D.G.; DeHart, T.E.; McDonald, K.F.; McNeil, J.L.; Milroy, R.D.; Slough, J.T.; Maqueda, R.; Wurden, G.A.

    1993-01-01

    The Large-s Experiment (LSX) was built to study the formation and equilibrium properties of field-reversed configurations (FRCs) as the scale size increases. The dynamic, field-reversed theta-pinch method of FRC creation produces axial and azimuthal deformations and makes formation difficult, especially in large devices with large s (number of internal gyroradii) where it is difficult to achieve initial plasma uniformity. However, with the proper technique, these formation distortions can be minimized and are then observed to decay with time. This suggests that the basic stability and robustness of FRCs formed, and in some cases translated, in smaller devices may also characterize larger FRCs. Elaborate formation controls were included on LSX to provide the initial uniformity and symmetry necessary to minimize formation disturbances, and stable FRCs could be formed up to the design goal of s = 8. For s ≤ 4, the formation distortions decayed away completely, resulting in symmetric equilibrium FRCs with record confinement times up to 0.5 ms, agreeing with previous empirical scaling laws (τ∝sR). Above s = 4, reasonably long-lived (up to 0.3 ms) configurations could still be formed, but the initial formation distortions were so large that they never completely decayed away, and the equilibrium confinement was degraded from the empirical expectations. The LSX was only operational for 1 yr, and it is not known whether s = 4 represents a fundamental limit for good confinement in simple (no ion beam stabilization) FRCs or whether it simply reflects a limit of present formation technology. Ideally, s could be increased through flux buildup from neutral beams. Since the addition of kinetic or beam ions will probably be desirable for heating, sustainment, and further stabilization of magnetohydrodynamic modes at reactor-level s values, neutral beam injection is the next logical step in FRC development. 24 refs., 21 figs., 2 tabs

  14. The Christmas list

    CERN Multimedia

    James Gillies

    2010-01-01

    List making seems to be among mankind’s favourite activities, particularly as the old year draws to a close and the new one begins. It seems that we all want to know what the top 100 annoying pop songs are, who are the world’s most embarrassing people and what everyone’s been watching on TV. The transition from 2009 to 2010 was no different, but some of the latest batch of lists have a few surprising entries. According to the Global Language Monitor, ‘twitter’ was the top word of 2009. No surprises there, but ‘hadron’ came in at number 8 on the list. ‘King of pop’ was top phrase, according to the same source, but ‘god particle’ came in at number 10. And while ‘Barack Obama’ was the name of the year, ‘Large Hadron Collider’ came in at number four. The Global Language Monitor was not the only organization whose lists included particle physics references. ...

  15. GRIP LANGLEY AEROSOL RESEARCH GROUP EXPERIMENT (LARGE) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Langley Aerosol Research Group Experiment (LARGE) measures ultrafine aerosol number density, total and non-volatile aerosol number density, dry aerosol size...

  16. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Number-conserving random phase approximation with analytically integrated matrix elements

    International Nuclear Information System (INIS)

    Kyotoku, M.; Schmid, K.W.; Gruemmer, F.; Faessler, A.

    1990-01-01

    In the present paper a number conserving random phase approximation is derived as a special case of the recently developed random phase approximation in general symmetry projected quasiparticle mean fields. All the occurring integrals induced by the number projection are performed analytically after writing the various overlap and energy matrices in the random phase approximation equation as polynomials in the gauge angle. In the limit of a large number of particles the well-known pairing vibration matrix elements are recovered. We also present a new analytically number projected variational equation for the number conserving pairing problem

  18. Gaming the Law of Large Numbers

    Science.gov (United States)

    Hoffman, Thomas R.; Snapp, Bart

    2012-01-01

    Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
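
    The law of large numbers that such dice games bring to life is easy to demonstrate: the empirical frequency of any die face settles toward its theoretical probability of 1/6 as the number of rolls grows. A minimal simulation (not tied to the Fibber's Dice rules):

```python
import random

random.seed(1)

# Empirical frequency of rolling a six converges to 1/6 as trials grow.
def six_frequency(n_rolls, rng=random):
    hits = sum(1 for _ in range(n_rolls) if rng.randint(1, 6) == 6)
    return hits / n_rolls

for n in (10, 1000, 100_000):
    print(n, six_frequency(n))
```

    With only 10 rolls the frequency can land far from 1/6; by 100,000 rolls it reliably sits within a percent of it, which is the convergence the law describes.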

  19. Prospecting direction and favourable target areas for exploration of large and super-large uranium deposits in China

    International Nuclear Information System (INIS)

    Liu Xingzhong

    1993-01-01

    A host of large uranium deposits have been successively discovered abroad since the 1970's by means of geological exploration, metallogenetic model studies, and the application of new geophysical and geochemical methods. Thorough geological research relevant to prospecting for super-large uranium deposits has attracted great attention from geological circles worldwide. The important task for uranium geological workers is to make an effort to discover more large and super-large uranium deposits in China. The author comprehensively analyses the regional geological setting and geological metallogenetic conditions of super-large uranium deposits in the world. Comparative studies have been undertaken, and the prospecting direction and favourable target areas for the exploration of super-large uranium deposits in China are proposed

  20. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g., in number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  1. Effect of Temperature Shock and Inventory Surprises on Natural Gas and Heating Oil Futures Returns

    Science.gov (United States)

    Hu, John Wei-Shan; Lin, Chien-Yu

    2014-01-01

    The aim of this paper is to examine the impact of temperature shock on both near-month and far-month natural gas and heating oil futures returns by extending the weather and storage models of the previous study. Several notable findings from the empirical studies are presented. First, the expected temperature shock significantly and positively affects both the near-month and far-month natural gas and heating oil futures returns. Next, a significant temperature shock affects both the conditional mean and volatility of natural gas and heating oil prices. The results indicate that expected inventory surprises significantly and negatively affect the far-month natural gas futures returns. Moreover, volatility of natural gas futures returns is higher on Thursdays, and that of near-month heating oil futures returns is higher on Wednesdays, than on other days. Finally, it is found that the storage announcement for natural gas significantly affects near-month and far-month natural gas futures returns. Furthermore, both natural gas and heating oil futures returns are affected more by the weighted average temperature reported by multiple weather reporting stations than by that reported by a single weather reporting station. PMID:25133233

  2. Random numbers spring from alpha decay

    International Nuclear Information System (INIS)

    Frigerio, N.A.; Sanathanan, L.P.; Morley, M.; Clark, N.A.; Tyler, S.A.

    1980-05-01

    Congruential random number generators, which are widely used in Monte Carlo simulations, are deficient in that the numbers they generate are concentrated in a relatively small number of hyperplanes. While this deficiency may not be a limitation in small Monte Carlo studies involving a few variables, it introduces a significant bias in large simulations requiring high resolution. This bias was recognized and assessed during preparations for an accident analysis study of nuclear power plants. This report describes a random number device based on the radioactive decay of alpha particles from a ²³⁵U source in a high-resolution gas proportional counter. The signals were fed to a 4096-channel analyzer and for each channel the frequency of signals registered in a 20,000-microsecond interval was recorded. The parity bits of these frequency counts (0 for an even count and 1 for an odd count) were then assembled in sequence to form 31-bit binary random numbers and transcribed to a magnetic tape. This cycle was repeated as many times as were necessary to create 3 million random numbers. The frequency distribution of counts from the present device conforms to the Brockwell-Moyal distribution, which takes into account the dead time of the counter (both the dead time and the decay constant of the underlying Poisson process were estimated). Analysis of the count data and tests of randomness on a sample set of the 31-bit binary numbers indicate that this random number device is a highly reliable source of truly random numbers. Its use is, therefore, recommended in Monte Carlo simulations for which congruential pseudorandom number generators are found to be inadequate. 6 figures, 5 tables
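
    The parity-bit scheme described above is straightforward to simulate. The sketch below is a software stand-in, with decay counts drawn from a Poisson distribution of arbitrary mean rather than read from a real counter, and with dead-time effects ignored:

```python
import math
import random

random.seed(0)

def parity_bits(n_bits, mean_count=40.0, rng=random):
    # Each simulated 20,000-microsecond interval yields a Poisson count;
    # its parity (0 = even, 1 = odd) becomes one random bit.
    bits = []
    for _ in range(n_bits):
        L, k, p = math.exp(-mean_count), 0, 1.0
        while p > L:                 # Knuth's Poisson sampler
            k += 1
            p *= rng.random()
        bits.append((k - 1) % 2)
    return bits

def assemble_31bit(bits):
    # Pack successive parity bits into 31-bit binary random numbers.
    return [int("".join(map(str, bits[i:i + 31])), 2)
            for i in range(0, len(bits) - 30, 31)]

print(assemble_31bit(parity_bits(31 * 4)))   # four 31-bit integers
```

    Parity extraction is a simple form of whitening: even if the mean count drifts, the even/odd split of a Poisson count stays very close to 50/50, which is what makes the assembled bits usable as uniform random numbers.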

  3. Preschool acuity of the approximate number system correlates with school math ability.

    Science.gov (United States)

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2011-11-01

    Previous research shows a correlation between individual differences in people's school math abilities and the accuracy with which they rapidly and nonverbally approximate how many items are in a scene. This finding is surprising because the Approximate Number System (ANS) underlying numerical estimation is shared with infants and with non-human animals who never acquire formal mathematics. However, it remains unclear whether the link between individual differences in math ability and the ANS depends on formal mathematics instruction. Earlier studies demonstrating this link tested participants only after they had received many years of mathematics education, or assessed participants' ANS acuity using tasks that required additional symbolic or arithmetic processing similar to that required in standardized math tests. To ask whether the ANS and math ability are linked early in life, we measured the ANS acuity of 200 3- to 5-year-old children using a task that did not also require symbol use or arithmetic calculation. We also measured children's math ability and vocabulary size prior to the onset of formal math instruction. We found that children's ANS acuity correlated with their math ability, even when age and verbal skills were controlled for. These findings provide evidence for a relationship between the primitive sense of number and math ability starting early in life. © 2011 Blackwell Publishing Ltd.
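
    "Controlled for" here refers to partialling covariates out of a correlation. A common recipe is to regress both variables on the covariates and correlate the residuals; the sketch below applies that recipe to synthetic data (all numbers invented for illustration, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: ANS acuity and math ability both improve with age,
# plus a direct ANS-math link (coefficients are illustrative only).
n = 200
age = rng.uniform(3, 5, n)
vocab = age + rng.normal(0, 0.5, n)
ans = age + rng.normal(0, 0.5, n)
math_ability = age + 0.5 * ans + rng.normal(0, 0.5, n)

def residualize(y, covariates):
    # Remove the least-squares fit on the covariates (plus intercept).
    X = np.column_stack([np.ones(n), *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Correlation between ANS and math after regressing out age and vocabulary.
r_partial = np.corrcoef(residualize(ans, [age, vocab]),
                        residualize(math_ability, [age, vocab]))[0, 1]
print(round(r_partial, 2))
```

    Because the synthetic data include a direct ANS-math term, the partial correlation stays clearly positive even after age and vocabulary are removed, mirroring the pattern the study reports.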

  4. Fabrication of large-scale one-dimensional Au nanochain and nanowire networks by interfacial self-assembly

    International Nuclear Information System (INIS)

    Wang Minhua; Li Yongjun; Xie Zhaoxiong; Liu Cai; Yeung, Edward S.

    2010-01-01

    By utilizing the strong capillary attraction between interfacial nanoparticles, large-scale one-dimensional Au nanochain networks were fabricated at the n-butanol/water interface, and could be conveniently transferred onto hydrophilic substrates. Furthermore, the length of the nanochains could be adjusted simply by controlling the density of Au nanoparticles (AuNPs) at the n-butanol/water interface. Surprisingly, the resultant Au nanochains could further transform into smooth nanowires by increasing the aging time, forming a nanowire network. Combined characterization by HRTEM and UV-vis spectroscopy indicates that the formation of Au nanochains stemmed from a stochastic assembly of interfacial AuNPs due to strong capillary attraction, and the evolution of nanochains to nanowires follows an Ostwald ripening mechanism rather than an oriented attachment. This method could be utilized to fabricate large-area nanochain or nanowire networks more uniformly on solid substrates than evaporating a solution of nanochain colloid does, since it eliminates the three-dimensional aggregation behavior.

  5. The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane.

    Science.gov (United States)

    Hesse, Philipp Nikolaus; Bremmer, Frank

    2017-01-01

    The existence of an association between numbers and space has been known for a long time. The most prominent demonstration of this relationship is the spatial numerical association of response codes (SNARC) effect, describing the fact that participants' reaction times are shorter with the left hand for small numbers and with the right hand for large numbers when they are asked to judge the parity of a number (Dehaene et al., J. Exp. Psychol., 122, 371-396, 1993). The SNARC effect is commonly seen as support for the concept of a mental number line, i.e. a mentally conceived line where small numbers are represented more on the left and large numbers are represented more on the right. The SNARC effect has been demonstrated for all three cardinal axes, and recently a transverse SNARC plane has been reported (Chen et al., Exp. Brain Res., 233(5), 1519-1528, 2015). Here, by employing saccadic responses induced by auditory or visual stimuli, we measured the SNARC effect within the same subjects along the horizontal (HM) and vertical meridian (VM) and along the two interspersed diagonals. We found a SNARC effect along HM and VM, which allowed us to predict the occurrence of a SNARC effect along the two diagonals by means of linear regression. Importantly, significant differences in SNARC strength were found between modalities. Our results suggest the existence of a frontoparallel mental number plane, where small numbers are represented left and down, while large numbers are represented right and up. Together with the recently described transverse mental number plane, our findings provide further evidence for the existence of a three-dimensional mental number space. Copyright © 2016 Elsevier Ltd. All rights reserved.
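
    A standard way to quantify a SNARC effect is to regress the right-minus-left response-time difference (dRT) on number magnitude: a negative slope means small numbers favour left/down responses and large numbers right/up. The sketch below uses invented dRT values, not the study's saccade data, and shows how a 45° diagonal slope would follow from the two cardinal slopes under a frontoparallel number plane:

```python
import numpy as np

numbers = np.arange(1, 10)

# Illustrative dRT values in ms (right minus left / up minus down).
dRT_horizontal = np.array([40, 30, 25, 10, 0, -5, -15, -28, -35])
dRT_vertical = np.array([22, 18, 10, 6, 2, -4, -8, -14, -20])

# Fit dRT = slope * magnitude + intercept; a negative slope = SNARC effect.
slope_h = np.polyfit(numbers, dRT_horizontal, 1)[0]
slope_v = np.polyfit(numbers, dRT_vertical, 1)[0]

# On a frontoparallel number plane, a 45-degree diagonal axis should show
# roughly the average of the two cardinal slopes.
predicted_diag = (slope_h + slope_v) / 2
print(round(slope_h, 1), round(slope_v, 1), round(predicted_diag, 1))
```

    Comparing such a predicted diagonal slope against one measured directly is the kind of linear-regression check the abstract describes.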

  6. Translations on USSR Military Affairs, Number 1328.

    Science.gov (United States)

    1978-02-02

    of his age. This news came as no surprise to Mikhail Vasil’yevich, and he reported to the commander his readiness immediately to begin the transfer...for this period, in which regard he dropped a hint to the commander. Major Labushev did not take the hint. Convinced that there was no mutual...District. It was directed thence to the garrison judge advocate, who sent Mikhail Vasil’yevich an encouraging letter. It declared that his unit

  7. Ubiquitylation and degradation of elongating RNA polymerase II

    DEFF Research Database (Denmark)

    Wilson, Marcus D; Harreman, Michelle; Svejstrup, Jesper Q

    2013-01-01

    During its journey across a gene, RNA polymerase II has to contend with a number of obstacles to its progression, including nucleosomes, DNA-binding proteins, DNA damage, and sequences that are intrinsically difficult to transcribe. Not surprisingly, a large number of elongation factors have... In this review, we describe the mechanisms and factors responsible for the last resort mechanism of transcriptional elongation. This article is part of a Special Issue entitled: RNA polymerase II Transcript Elongation...

  8. Evaluation of use of MPAD trajectory tape and number of orbit points for orbiter mission thermal predictions

    Science.gov (United States)

    Vogt, R. A.

    1979-01-01

    The use of the mission planning and analysis division (MPAD) common-format trajectory data tape to predict temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136-node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and with the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would be caused not only by the inaccuracies of the current method's assumed nominal attitude profile but also by a lack of a sufficient number of orbit points in the current method. Comparisons between 6, 12, and 24 orbit points showed a surprising insensitivity to the number of orbit points.

  9. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have been recently revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  10. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi–site data by using within–year recaptures to provide a covariate of between–year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands where a re–introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap–happiness” effects. As the data are based on resightings, such trap–happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  11. Measuring happiness in large population

    Science.gov (United States)

    Wenas, Annabelle; Sjahputri, Smita; Takwin, Bagus; Primaldhi, Alfindra; Muhamad, Roby

    2016-01-01

    The ability to know emotional states for a large number of people is important, for example, to ensure the effectiveness of public policies. In this study, we propose a measure of happiness that can be used on a large-scale population, based on the analysis of Indonesian-language lexicons. Here, we incorporate human assessment of Indonesian words, then quantify happiness on a large scale of texts gathered from Twitter conversations. We used two psychological constructs to measure happiness: valence and arousal. We found that Indonesian words have a tendency towards positive emotions. We also identified several happiness patterns during days of the week, hours of the day, and selected conversation topics.
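
    A lexicon-based happiness score of the kind described reduces to averaging per-word valence and arousal ratings over the words of a text. The sketch below uses a three-word toy lexicon with invented scores, not the human-rated Indonesian lexicon from the study:

```python
# Toy lexicon: the words are Indonesian, but the valence/arousal scores
# are invented placeholders, not ratings from the study.
LEXICON = {
    "senang": {"valence": 0.9, "arousal": 0.6},   # happy
    "sedih":  {"valence": 0.1, "arousal": 0.4},   # sad
    "tenang": {"valence": 0.7, "arousal": 0.2},   # calm
}

def score_text(text):
    # Average valence and arousal over the lexicon words found in the text.
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return None
    return {dim: sum(h[dim] for h in hits) / len(hits)
            for dim in ("valence", "arousal")}

print(score_text("hari ini saya senang dan tenang"))
```

    Applied to millions of tweets, per-text scores like these can be aggregated by hour, weekday, or topic to produce exactly the kinds of happiness patterns the abstract reports.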

  12. Gauge theory for baryon and lepton numbers with leptoquarks.

    Science.gov (United States)

    Duerr, Michael; Fileviez Pérez, Pavel; Wise, Mark B

    2013-06-07

    Models where the baryon (B) and lepton (L) numbers are local gauge symmetries that are spontaneously broken at a low scale are revisited. We find new extensions of the standard model which predict the existence of fermions that carry both baryon and lepton numbers (i.e., leptoquarks). The local baryonic and leptonic symmetries can be broken at a scale close to the electroweak scale and we do not need to postulate the existence of a large desert to satisfy the experimental constraints on baryon number violating processes like proton decay.

  13. Number Sense on the Number Line

    Science.gov (United States)

    Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni

    2018-01-01

    A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…

  14. Coding strategies in number space : Memory requirements influence spatial-numerical associations

    NARCIS (Netherlands)

    Lindemann, Oliver; Abolafia, Juan M.; Pratt, Jay; Bekkering, Harold

    The tendency to respond faster with the left hand to relatively small numbers and faster with the right hand to relatively large numbers (spatial numerical association of response codes, SNARC effect) has been interpreted as an automatic association of spatial and numerical information. We

  15. Large carnivores, moose, and humans: A changing paradigm of predator management in the 21st century

    Science.gov (United States)

    Schwartz, Charles C.; Swenson, J.E.; Miller, Sterling D.

    2003-01-01

    We compare and contrast the evolution of human attitudes toward large carnivores between Europe and North America. In general, persecution of large carnivores began much earlier in Europe than North America. Likewise, conservation programs directed at restoration and recovery appeared in European history well before they did in North America. Together, the pattern suggests there has been an evolution in how humans perceive large predators. Our early ancestors were physically vulnerable to large carnivores and developed corresponding attitudes of respect, avoidance, and acceptance. As civilization evolved and man developed weapons, the balance shifted. Early civilizations, in particular those with pastoral ways, attempted to eliminate large carnivores as threats to life and property. Brown bears (Ursus arctos) and wolves (Canis lupus) were consequently extirpated from much of their range in Europe and in North America south of Canada. Efforts to protect brown bears began in the late 1880s in some European countries and population reintroductions and augmentations are ongoing. They are less controversial than in North America. On the other hand, there are no wolf introductions, as has occurred in North America, and Europeans have a more negative attitude towards wolves. Control of predators to enhance ungulate harvest varies. In Western Europe, landowners own the hunting rights to ungulates. In the formerly communistic Eastern European countries and North America, hunting rights are held in common, although this is changing in some Eastern European countries. Wolf control to increase harvests of moose (Alces alces) occurs in parts of North America and Russia; bear control for similar reasons only occurs in parts of North America. Surprisingly, bears and wolves are not controlled to increase ungulates where private landowners have the hunting rights in Europe, although wolves were originally exterminated from these areas. 
Both the inability of scientific research to

  16. Language and number: a bilingual training study.

    Science.gov (United States)

    Spelke, E S; Tsivkin, S

    2001-01-01

    Three experiments investigated the role of a specific language in human representations of number. Russian-English bilingual college students were taught new numerical operations (Experiment 1), new arithmetic equations (Experiments 1 and 2), or new geographical or historical facts involving numerical or non-numerical information (Experiment 3). After learning a set of items in each of their two languages, subjects were tested for knowledge of those items, and new items, in both languages. In all the studies, subjects retrieved information about exact numbers more effectively in the language of training, and they solved trained problems more effectively than untrained problems. In contrast, subjects retrieved information about approximate numbers and non-numerical facts with equal efficiency in their two languages, and their training on approximate number facts generalized to new facts of the same type. These findings suggest that a specific, natural language contributes to the representation of large, exact numbers but not to the approximate number representations that humans share with other mammals. Language appears to play a role in learning about exact numbers in a variety of contexts, a finding with implications for practice in bilingual education. The findings prompt more general speculations about the role of language in the development of specifically human cognitive abilities.

  17. Summit surprises.

    Science.gov (United States)

    Myers, N

    1994-01-01

    A New Delhi Population Summit, organized by the Royal Society, the US National Academy of Sciences, the Royal Swedish Academy of Sciences, and the Indian National Science Academy, was convened with representation of 120 (only 10% women) scientists from 50 countries and about 12 disciplines and 43 national scientific academies. Despite the common assumption that scientists never agree, a 3000 word statement was signed by 50 prominent national figures and supported by 25 professional papers on diverse subjects. The statement proclaimed that stable world population and "prodigious planning efforts" are required for dealing with global social, economic, and environmental problems. The target should be zero population growth by the next generation. The statement, although containing many uncompromising assertions, was not as strong as a statement by the Royal Society and the US National Academy of Sciences released last year: that, in the future, science and technology may not be able to prevent "irreversible degradation of the environment and continued poverty," and that the capacity to sustain life on the planet may be permanently jeopardized. The Delhi statement was backed by professional papers highlighting several important issues. Dr Mahmoud Fathalla of the Rockefeller Foundation claimed that the 500,000 annual maternal deaths worldwide, of which perhaps 33% are due to "coathanger" abortions, are given far less attention than a political event causing 500 deaths in a single day would receive. Although women have a biological survival advantage associated with their reproductive capacity, females are socially disadvantaged and relegated to low status. Females receive poorer nutrition and overall health care, female infanticide occurs, and female fetuses are increasingly aborted in China, India, and other countries. 
The sex ratio in developed countries is 95-97 males to every 100 females, but in developing Asian countries the ratio is 105 males to 100 females. There are reports of 60-100 million missing females. The human species 12,000 years ago had a population of 6 million, a life expectancy of 20 years, and a doubling time of 8000 years; high birth rates were important for preservation of the species. Profertility attitudes are still prevalent today. Insufficient funds go to contraceptive research.

  18. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  19. Large-Nc quantum chromodynamics and harmonic sums

    Indian Academy of Sciences (India)

    In the large-Nc limit of QCD, two-point functions of local operators become harmonic sums. I review some properties which follow from this fact and which are relevant for phenomenological applications. This has led us to consider a class of analytic number theory functions as toy models of large-Nc QCD which also is ...

  20. Principles for selecting earthquake motions in engineering design of large dams

    Science.gov (United States)

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

    This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable for other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions depends on which of two types of engineering analysis is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map; whereas, a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and its spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives the best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful. 
For example, peak motions at

  1. Small-scale dynamo at low magnetic Prandtl numbers

    Science.gov (United States)

    Schober, Jennifer; Schleicher, Dominik; Bovino, Stefano; Klessen, Ralf S.

    2012-12-01

    The present-day Universe is highly magnetized, even though the first magnetic seed fields were most probably extremely weak. To explain the growth of the magnetic field strength over many orders of magnitude, fast amplification processes need to operate. The most efficient mechanism known today is the small-scale dynamo, which converts turbulent kinetic energy into magnetic energy leading to an exponential growth of the magnetic field. The efficiency of the dynamo depends on the type of turbulence indicated by the slope of the turbulence spectrum v(ℓ)∝ℓ^{ϑ}, where v(ℓ) is the eddy velocity at a scale ℓ. We explore turbulent spectra ranging from incompressible Kolmogorov turbulence with ϑ=1/3 to highly compressible Burgers turbulence with ϑ=1/2. In this work, we analyze the properties of the small-scale dynamo for low magnetic Prandtl numbers Pm, which denotes the ratio of the magnetic Reynolds number, Rm, to the hydrodynamical one, Re. We solve the Kazantsev equation, which describes the evolution of the small-scale magnetic field, using the WKB approximation. In the limit of low magnetic Prandtl numbers, the growth rate is proportional to Rm^{(1-ϑ)/(1+ϑ)}. We furthermore discuss the critical magnetic Reynolds number Rm_{crit}, which is required for small-scale dynamo action. The value of Rm_{crit} is roughly 100 for Kolmogorov turbulence and 2700 for Burgers. Furthermore, we discuss that Rm_{crit} provides a stronger constraint in the limit of low Pm than it does for large Pm. We conclude that the small-scale dynamo can operate in the regime of low magnetic Prandtl numbers if the magnetic Reynolds number is large enough. Thus, the magnetic field amplification on small scales can take place in a broad range of physical environments and amplify weak magnetic seed fields on short time scales.
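    The growth-rate scaling quoted above depends only on the spectral slope ϑ; a minimal numerical sketch of the exponent (1−ϑ)/(1+ϑ), with a function name of our own choosing:

    ```python
    def growth_exponent(theta: float) -> float:
        """Exponent of Rm in the low-Pm small-scale dynamo growth rate,
        for a turbulence spectrum v(l) ~ l^theta."""
        return (1.0 - theta) / (1.0 + theta)

    # Kolmogorov (incompressible), theta = 1/3: growth rate ~ Rm^(1/2)
    print(growth_exponent(1 / 3))  # ≈ 0.5
    # Burgers (highly compressible), theta = 1/2: growth rate ~ Rm^(1/3)
    print(growth_exponent(1 / 2))  # ≈ 0.333
    ```

    Steeper (more compressible) spectra thus give a weaker dependence of the growth rate on Rm, consistent with the higher critical Reynolds number quoted for Burgers turbulence.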

  2. Small-scale dynamo at low magnetic Prandtl numbers.

    Science.gov (United States)

    Schober, Jennifer; Schleicher, Dominik; Bovino, Stefano; Klessen, Ralf S

    2012-12-01

    The present-day Universe is highly magnetized, even though the first magnetic seed fields were most probably extremely weak. To explain the growth of the magnetic field strength over many orders of magnitude, fast amplification processes need to operate. The most efficient mechanism known today is the small-scale dynamo, which converts turbulent kinetic energy into magnetic energy leading to an exponential growth of the magnetic field. The efficiency of the dynamo depends on the type of turbulence indicated by the slope of the turbulence spectrum v(ℓ)∝ℓ^{ϑ}, where v(ℓ) is the eddy velocity at a scale ℓ. We explore turbulent spectra ranging from incompressible Kolmogorov turbulence with ϑ=1/3 to highly compressible Burgers turbulence with ϑ=1/2. In this work, we analyze the properties of the small-scale dynamo for low magnetic Prandtl numbers Pm, which denotes the ratio of the magnetic Reynolds number, Rm, to the hydrodynamical one, Re. We solve the Kazantsev equation, which describes the evolution of the small-scale magnetic field, using the WKB approximation. In the limit of low magnetic Prandtl numbers, the growth rate is proportional to Rm^{(1-ϑ)/(1+ϑ)}. We furthermore discuss the critical magnetic Reynolds number Rm_{crit}, which is required for small-scale dynamo action. The value of Rm_{crit} is roughly 100 for Kolmogorov turbulence and 2700 for Burgers. Furthermore, we discuss that Rm_{crit} provides a stronger constraint in the limit of low Pm than it does for large Pm. We conclude that the small-scale dynamo can operate in the regime of low magnetic Prandtl numbers if the magnetic Reynolds number is large enough. Thus, the magnetic field amplification on small scales can take place in a broad range of physical environments and amplify weak magnetic seed fields on short time scales.

  3. Light U(1) gauge boson coupled to baryon number

    International Nuclear Information System (INIS)

    Carone, C.D.; Murayama, Hitoshi

    1995-06-01

    The authors discuss the phenomenology of a light U(1) gauge boson, γ_B, that couples only to baryon number. Gauging baryon number at high energies can prevent dangerous baryon-number violating operators that may be generated by Planck-scale physics. However, they assume at low energies that the new U(1) gauge symmetry is spontaneously broken and that the γ_B mass m_{γ_B} is smaller than m_Z. They show for m_{γ_B} < m_Z that the γ_B coupling α_B can be as large as ∼ 0.1 without conflicting with the current experimental constraints. The authors argue that α_B ∼ 0.1 is large enough to produce visible collider signatures and that evidence for the γ_B could be hidden in existing LEP data. They show that there are realistic models in which mixing between the γ_B and the electroweak gauge bosons occurs only as a radiative effect and does not lead to conflict with precision electroweak measurements. Such mixing may nevertheless provide a leptonic signal for models of this type at an upgraded Tevatron

  4. The Story Behind the Numbers: Lessons Learned from the Integration of Monitoring Resources in Addressing an ISS Water Quality Anomaly

    Science.gov (United States)

    McCoy, Torin; Flint, Stephanie; Straub, John, II; Gazda, Dan; Schultz, John

    2011-01-01

    Beginning in June of 2010 an environmental mystery was unfolding on the International Space Station (ISS). The U.S. Water Processor Assembly (WPA) began to produce water with increasing levels of total organic carbon (TOC). A surprisingly consistent upward TOC trend was observed through weekly in-flight total organic carbon analyzer (TOCA) monitoring. As TOC is a general organics indicator, return of water archive samples was needed to make better-informed crew health decisions and to aid in WPA troubleshooting. TOCA-measured TOC was more than halfway to its health-based screening limit before archive samples could be returned on Soyuz 22 and analyzed. Although TOC was confirmed to be elevated, somewhat surprisingly, none of the typical target compounds were the source. After some solid detective work, it was confirmed that the TOC was associated with a compound known as dimethylsilanediol (DMSD). DMSD is believed to be a breakdown product of silicon-containing compounds present on ISS. A toxicological limit was set for DMSD and a forward plan developed for operations given this new understanding of the source of the TOC. This required extensive coordination with ISS stakeholders and innovative use of available in-flight and archive monitoring resources. Behind the numbers and scientific detail surrounding this anomaly, there exists a compelling story of multi-disciplinary awareness, teamwork, and important environmental lessons learned.

  5. Metaproteomics of cellulose methanisation under thermophilic conditions reveals a surprisingly high proteolytic activity.

    Science.gov (United States)

    Lü, Fan; Bize, Ariane; Guillot, Alain; Monnet, Véronique; Madigou, Céline; Chapleur, Olivier; Mazéas, Laurent; He, Pinjing; Bouchez, Théodore

    2014-01-01

    Cellulose is the most abundant biopolymer on Earth. Optimising energy recovery from this renewable but recalcitrant material is a key issue. The metaproteome expressed by thermophilic communities during cellulose anaerobic digestion was investigated in microcosms. By multiplying the analytical replicates (65 protein fractions analysed by MS/MS) and relying solely on public protein databases, more than 500 non-redundant protein functions were identified. The taxonomic community structure as inferred from the metaproteomic data set was in good overall agreement with 16S rRNA gene tag pyrosequencing and fluorescent in situ hybridisation analyses. Numerous functions related to cellulose and hemicellulose hydrolysis and fermentation catalysed by bacteria related to Caldicellulosiruptor spp. and Clostridium thermocellum were retrieved, indicating their key role in the cellulose-degradation process and also suggesting their complementary action. Despite the abundance of acetate as a major fermentation product, key methanogenesis enzymes from the acetoclastic pathway were not detected. In contrast, enzymes from the hydrogenotrophic pathway affiliated to Methanothermobacter were almost exclusively identified for methanogenesis, suggesting a syntrophic acetate oxidation process coupled to hydrogenotrophic methanogenesis. Isotopic analyses confirmed the high dominance of the hydrogenotrophic methanogenesis. Very surprising was the identification of an abundant proteolytic activity from Coprothermobacter proteolyticus strains, probably acting as scavenger and/or predator performing proteolysis and fermentation. Metaproteomics thus appeared as an efficient tool to unravel and characterise metabolic networks as well as ecological interactions during methanisation bioprocesses. More generally, metaproteomics provides direct functional insights at a limited cost, and its attractiveness should increase in the future as sequence databases are growing exponentially.

  6. Economic considerations in the optimal size and number of reserve sites

    NARCIS (Netherlands)

    Groeneveld, R.A.

    2005-01-01

    The debate among ecologists on the optimal number of reserve sites under a fixed maximum total reserve area-the single large or several small (SLOSS) problem-has so far neglected the economic aspects of the problem. This paper argues that economic considerations can affect the optimal number and

  7. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications
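    The uniform random polygon model itself is simple to sample: n vertices drawn independently and uniformly in the unit cube, joined cyclically. A minimal sketch, with a naive O(n^2)-pair count of crossings in the xy-projection (the diagram statistic behind the abstract's crossing-number estimate; function names are illustrative):

    ```python
    import random

    def uniform_random_polygon(n):
        """n vertices sampled uniformly in the unit cube, joined cyclically."""
        return [(random.random(), random.random(), random.random())
                for _ in range(n)]

    def segments_cross(p, q, r, s):
        """True if 2D segments pq and rs properly intersect."""
        def orient(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        d1, d2 = orient(p, q, r), orient(p, q, s)
        d3, d4 = orient(r, s, p), orient(r, s, q)
        return d1 * d2 < 0 and d3 * d4 < 0

    def crossing_number_xy(poly):
        """Number of crossings in the projection onto the xy-plane."""
        n = len(poly)
        edges = [((poly[i][0], poly[i][1]),
                  (poly[(i + 1) % n][0], poly[(i + 1) % n][1]))
                 for i in range(n)]
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                if j == i + 1 or (i == 0 and j == n - 1):
                    continue  # adjacent edges share a vertex, not a crossing
                if segments_cross(*edges[i], *edges[j]):
                    count += 1
        return count
    ```

    Identifying the knot type from such a diagram (e.g. via determinants or colorings, as in the paper) requires substantially more machinery; this sketch only covers the sampling and the projected crossing count.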

  8. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  9. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  10. Mach Number effects on turbulent superstructures in wall bounded flows

    Science.gov (United States)

    Kaehler, Christian J.; Bross, Matthew; Scharnowski, Sven

    2017-11-01

    Planar and three-dimensional flow field measurements along a flat-plate boundary layer in the Trisonic Wind Tunnel Munich (TWM) are examined with the aim of characterizing the scaling, spatial organization, and topology of large-scale turbulent superstructures in compressible flow. This facility is ideal for this investigation as the ratio of boundary layer thickness to spanwise test-section extent is around 1/25, ensuring minimal sidewall and corner effects on turbulent structures in the center of the test section. A major difficulty in the experimental investigation of large-scale features is the sheer size of the superstructures, which can extend over many boundary layer thicknesses. Using multiple PIV systems, it was possible to capture the full spatial extent of large-scale structures over a range of Mach numbers from Ma = 0.3 - 3. To calculate the average large-scale structure length and spacing, the acquired vector fields were analyzed by statistical multi-point methods that show large-scale structures with a correlation length of around 10 boundary layer thicknesses over the range of Mach numbers investigated. Furthermore, the average spacing between high- and low-momentum structures is on the order of one boundary layer thickness. This work is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.

  11. Latin America: how a region surprised the experts.

    Science.gov (United States)

    De Sherbinin, A

    1993-02-01

    In 1960-1970, family planning specialists and demographers worried that poverty, limited education, Latin machismo, and strong catholic ideals would obstruct family planning efforts to reduce high fertility in Latin America. It had the highest annual population growth rate in the world (2.8%), which would increase the population 2-fold in 25 years. Yet, the UN's 1992 population projection for Latin America and the Caribbean in the year 2000 was about 20% lower than its 1963 projection (just over 500 vs. 638 million). Since life expectancy increased simultaneously from 57 to 68 years, this reduced projection was caused directly by a large decline in fertility from 5.9 to 3. A regression analysis of 11 Latin American and Caribbean countries revealed that differences in the contraceptive prevalence rates accounted for 90% of the variation in the total fertility rate between countries. Thus, contraception played a key role in the fertility decline. The second most significant determinant of fertility decline was an increase in the average age at first marriage from about 20 to 23 years. Induced abortion and breast feeding did not contribute significantly to fertility decline. The major socioeconomic factors responsible for the decline included economic development and urbanization, resulting in improvements in health care, reduced infant and child mortality, and increases in female literacy, education, and labor force participation. Public and private family planning programs also contributed significantly to the decline. They expanded from cities to remote rural areas, thereby increasing access to contraception. By the early 1990s, Brazil, Mexico, and Colombia had among the lowest levels of unmet need (13-24%) in developing countries. Other key factors of fertility decline were political commitment, strong communication efforts, and stress on quality services. Latin America provides hope to other regions where religion and culture promote a large family size.

  12. BRST cohomology of the superstring at arbitrary ghost number

    International Nuclear Information System (INIS)

    Horowitz, G.T.; Myers, R.C.; Martin, S.P.

    1989-01-01

    We investigate the cohomology of the BRST operator of the NSR superstring. No restriction is placed on the ghost number of the states. It is shown that every cohomology class can be written as a picture-changed version of one of the known cohomology classes at a fixed ghost number. A generalization of this result is also found for the cohomology in the large algebra of a new bosonization of the superconformal ghosts. (orig.)

  13. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
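    The single-neuron BCM rule the abstract starts from is a pair of coupled ODEs: dw/dt = x·y·(y − θ), with a sliding modification threshold dθ/dt = (y² − θ)/τ. A minimal Euler-integration sketch for one synapse (parameter values are illustrative, and the paper's whole-network extension is not reproduced here):

    ```python
    def bcm_step(w, theta, x, dt=0.01, tau_theta=0.1):
        """One Euler step of the single-synapse BCM plasticity rule."""
        y = w * x                               # linear neuron response
        dw = x * y * (y - theta)                # BCM synaptic modification
        dtheta = (y ** 2 - theta) / tau_theta   # sliding threshold tracks y^2
        return w + dt * dw, theta + dt * dtheta

    # With constant input x = 1 the weight converges to the stable
    # fixed point w = theta = 1 (where y = y^2).
    w, theta = 0.5, 0.0
    for _ in range(5000):
        w, theta = bcm_step(w, theta, x=1.0)
    print(round(w, 3), round(theta, 3))  # ≈ 1.0 1.0
    ```

    The fast threshold (small tau_theta relative to the weight dynamics) is what keeps the rule stable; with a slowly sliding threshold the weight can run away, which is the bistability/selectivity trade-off the abstract alludes to.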

  14. Measurement of the Mass of an Object Hanging from a Spring--Revisited

    Science.gov (United States)

    Serafin, Kamil; Oracz, Joanna; Grzybowski, Marcin; Koperski, Maciej; Sznajder, Pawel; Zinkiewicz, Lukasz; Wasylczyk, Piotr

    2012-01-01

    In an open competition, students were to determine the mass of a metal cylinder hanging on a spring inside a transparent enclosure. With the time for experiments limited to 24 h due to the unexpectedly large number of participants, a few surprisingly accurate results were submitted, the best of them differing by no more than 0.5% from the true…

  15. Achieving online consent to participation in large-scale gene-environment studies: a tangible destination

    NARCIS (Netherlands)

    Wood, F.; Kowalczuk, J.; Elwyn, G.; Mitchell, C.; Gallacher, J.

    2011-01-01

    BACKGROUND: Population based genetics studies are dependent on large numbers of individuals in the pursuit of small effect sizes. Recruiting and consenting a large number of participants is both costly and time consuming. We explored whether an online consent process for large-scale genetics studies

  16. On the chromatic number of general Kneser hypergraphs

    DEFF Research Database (Denmark)

    Alishahi, Meysam; Hajiabolhassan, Hossein

    2015-01-01

    In a break-through paper, Lovász [20] determined the chromatic number of Kneser graphs. This was improved by Schrijver [27], by introducing the Schrijver subgraphs of Kneser graphs and showing that their chromatic number is the same as that of Kneser graphs. Alon, Frankl, and Lovász [2] extended...... their chromatic number as an approach to a supposition of Ziegler [35] and a conjecture of Alon, Drewnowski, and Łuczak [3]. In this work, our second main result is to improve this by computing the chromatic number of a large family of Schrijver hypergraphs. Our last main result is to prove the existence...... of a completely multicolored complete bipartite graph in every coloring of a graph which extends a result of Simonyi and Tardos [29].The first two results are proved using a new improvement of the Dol'nikov-Kříž [7,18] bound on the chromatic number of general Kneser hypergraphs....
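    Lovász's formula mentioned above, χ(KG(n, k)) = n − 2k + 2, can be checked by brute force for tiny cases. A sketch for KG(5, 2), which is the Petersen graph (exponential-time, purely illustrative):

    ```python
    from itertools import combinations, product

    def kneser_graph(n, k):
        """Vertices: k-subsets of {0..n-1}; edges join disjoint subsets."""
        verts = [frozenset(c) for c in combinations(range(n), k)]
        edges = [(i, j)
                 for i in range(len(verts)) for j in range(i + 1, len(verts))
                 if not (verts[i] & verts[j])]
        return len(verts), edges

    def is_colorable(nv, edges, c):
        """Try all c-colorings of nv vertices (exponential; tiny cases only)."""
        for coloring in product(range(c), repeat=nv):
            if all(coloring[i] != coloring[j] for i, j in edges):
                return True
        return False

    def chromatic_number(nv, edges):
        c = 1
        while not is_colorable(nv, edges, c):
            c += 1
        return c

    nv, edges = kneser_graph(5, 2)      # KG(5,2) is the Petersen graph
    print(chromatic_number(nv, edges))  # -> 3 (Lovász: n - 2k + 2)
    ```

    Brute force is only feasible for a handful of vertices; the point of the topological methods cited in the abstract is precisely that they determine the chromatic number in general.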

  17. The Influence of Company Size on Accounting Information: Evidence in Large Caps and Small Caps Companies Listed on BM&FBovespa

    Directory of Open Access Journals (Sweden)

    Karen Yukari Yokoyama

    2015-09-01

    In this study, the relation between aspects of accounting information and the capitalization level of companies listed on the São Paulo Stock Exchange between 2010 and 2012 was investigated, with firms classified as Large Caps or Small Caps (larger and smaller capitalization, respectively). Three accounting information measures were addressed: informativeness, conservatism, and relevance, through the application of Easton and Harris' (1991) earnings informativeness model, Basu's (1997) conditional conservatism model, and the value relevance model based on Ohlson (1995). The results showed that, although the Large Caps present a higher level of conservatism, their accounting figures were less informative and more relevant when compared to the Small Caps companies. Due to the greater production of private information (pre-disclosure) surrounding larger companies, the market would tend to respond less strongly, or with less surprise, to the publication of these companies' accounting information, while the lack of anticipated information would make the effect of disclosing these figures more preponderant for the Small Caps companies.
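    For readers unfamiliar with the first of the three models, Easton and Harris (1991) regress stock returns on earnings scaled by lagged price, R_it = α + β·(E_it/P_it−1) + ε_it, and read informativeness off the slope β (the earnings response coefficient). A self-contained sketch on synthetic data (all numbers below are invented, not taken from the study):

    ```python
    import random

    random.seed(42)
    n = 500
    # Synthetic panel: scaled earnings E/P with a "true" response of beta = 2.0.
    scaled_earnings = [random.gauss(0.05, 0.05) for _ in range(n)]
    returns = [0.02 + 2.0 * x + random.gauss(0.0, 0.02) for x in scaled_earnings]

    # Ordinary least squares for the simple regression R = alpha + beta * (E/P).
    mx = sum(scaled_earnings) / n
    my = sum(returns) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(scaled_earnings, returns))
            / sum((x - mx) ** 2 for x in scaled_earnings))
    alpha = my - beta * mx
    print(round(alpha, 3), round(beta, 3))  # estimates near (0.02, 2.0)
    ```

    In the paper's framing, a weaker market reaction to Large Caps' earnings news would show up as a smaller estimated β for that subsample.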

  18. Phenomenology of large Nc QCD

    International Nuclear Information System (INIS)

    Lebed, R.F.

    1999-01-01

    These lectures are designed to introduce the methods and results of large-Nc QCD in a presentation intended for nuclear and particle physicists alike. Beginning with definitions and motivations of the approach, we demonstrate that all quark and gluon Feynman diagrams are organized into classes based on powers of 1/Nc. We then show that this result can be translated into definite statements about mesons and baryons containing arbitrary numbers of constituents. In the mesons, numerous well-known phenomenological properties follow as immediate consequences of simply counting powers of Nc, while for the baryons, quantitative large-Nc analyses of masses and other properties are seen to agree with experiment, even when 'large' Nc is set equal to its observed value of 3. Large-Nc reasoning is also used to explain some simple features of nuclear interactions. (author)
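    The 1/Nc counting the lectures describe can be summarized by a standard rule ('t Hooft; Witten): with canonically normalized meson fields, a connected amplitude with n external mesons scales as Nc^(1 − n/2), so three-meson couplings vanish like 1/√Nc and mesons become free and stable as Nc → ∞. A toy encoding of that rule (a generic counting sketch, not code from the lectures):

    ```python
    from fractions import Fraction

    def meson_amplitude_power(n_mesons: int) -> Fraction:
        """Leading power of Nc for a connected n-meson amplitude: Nc**(1 - n/2)."""
        return Fraction(2 - n_mesons, 2)

    # Two-point functions are O(1); each extra external meson costs 1/sqrt(Nc),
    # so interactions switch off in the large-Nc limit.
    for n in (2, 3, 4, 5):
        print(n, meson_amplitude_power(n))  # n, exponent of Nc
    ```

    Setting Nc = 3 then suggests a generic ~1/3 suppression per power, which is the rough accuracy at which these predictions tend to work phenomenologically.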

  19. Phenomenology of large Nc QCD

    International Nuclear Information System (INIS)

    Richard Lebed

    1998-01-01

    These lectures are designed to introduce the methods and results of large-Nc QCD in a presentation intended for nuclear and particle physicists alike. Beginning with definitions and motivations of the approach, they demonstrate that all quark and gluon Feynman diagrams are organized into classes based on powers of 1/Nc. They then show that this result can be translated into definite statements about mesons and baryons containing arbitrary numbers of constituents. In the mesons, numerous well-known phenomenological properties follow as immediate consequences of simply counting powers of Nc, while for the baryons, quantitative large-Nc analyses of masses and other properties are seen to agree with experiment, even when "large" Nc is set equal to its observed value of 3. Large-Nc reasoning is also used to explain some simple features of nuclear interactions

  20. Connecting Numbers with Emotion: Review of Numbers and Nerves: Information, Emotion, and Meaning in a World of Data by Scott Slovic and Paul Slovic (2015)

    Directory of Open Access Journals (Sweden)

    Samuel L. Tunstall

    2017-01-01

    Scott Slovic and Paul Slovic (Eds.). Numbers and Nerves: Information, Emotion, and Meaning in a World of Data (Corvallis, OR: Oregon State University Press, 2015). 272 pp. ISBN 978-0-87071-776-5. It is common to view quantitative literacy as reasoning with respect to numbers. In Numbers and Nerves, the contributors to the volume make clear that we should attend not only to how students consciously reason with numbers, but also to how our innate biases influence our actions when faced with them. Beginning with the concepts of psychic numbing and then pseudoinefficacy, the contributors examine how our behavior when faced with large numbers is often not mathematically rational. I consider the implications of these phenomena for the Numeracy community.